lwp

How to POST content with an HTTP Request (Perl)

风流意气都作罢 posted on 2019-12-03 06:11:09
    use LWP::UserAgent;
    use Data::Dumper;

    my $ua = LWP::UserAgent->new;
    $ua->agent("AgentName/0.1 " . $ua->agent);

    my $req = HTTP::Request->new(POST => 'http://example.com');
    $req->content('port=8', 'target=64');  # problem
    my $res = $ua->request($req);
    print Dumper($res->content);

How can I send multiple pieces of content using $req->content? What kind of data does $req->content expect? It only sends the last one.

Edit: I found that it works if I format it as 'port=8&target=64'. Is there a better way?

Answer: Use the POST function from HTTP::Request::Common, which form-encodes the fields for you:

    use LWP::UserAgent;
    use HTTP::Request::Common qw(POST);

    my $ua = LWP::UserAgent->new();
    my $request = POST( $url, [ 'port' => 8, 'target' => 64 ] );
    my $response = $ua->request($request);
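To see what POST actually builds, you can inspect the request object without sending anything over the network; a minimal sketch (the example.com URL is a placeholder):

```perl
use strict;
use warnings;
use HTTP::Request::Common qw(POST);

# Passing the form fields as an array reference makes POST build a
# form-encoded body and set the matching Content-Type header.
my $request = POST('http://example.com', [ port => 8, target => 64 ]);

print $request->content, "\n";                 # port=8&target=64
print $request->header('Content-Type'), "\n";  # application/x-www-form-urlencoded
```

The array reference preserves field order, so the body matches the hand-built 'port=8&target=64' string from the question.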

“get” not working in perl

旧城冷巷雨未停 posted on 2019-12-02 10:54:21
I'm new to Perl. In the past few days, I've written some simple scripts that save websites' source code to my computer via "get". They do what they're supposed to, but they will not get the content of a website that is a forum. Non-forum websites work just fine. Any idea what's going on? Here's the problem chunk:

    my $url = 'http://www.computerforum.com/';
    my $content = get $url || die "Unable to get content";

Answer (from http://p3rl.org/LWP::Simple#get): The get() function will fetch the document identified by the given URL and return it. It returns undef if it fails. […] You will not be able to examine the […]

Note also that || binds tighter than a list operator's arguments, so "get $url || die ..." parses as "get($url || die ...)" and the die never fires on a failed fetch; write "my $content = get($url) or die ..." instead.
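One common reason a forum rejects a script while other sites work is the default libwww-perl User-Agent string. LWP::UserAgent lets you send a different one and examine why a fetch failed; a sketch, not the accepted answer (the agent string below is made up):

```perl
use strict;
use warnings;
use LWP::UserAgent;

# Fetch a page with a custom User-Agent and report why a fetch failed,
# instead of LWP::Simple's get(), which only returns undef on failure.
sub fetch_page {
    my ($url) = @_;
    my $ua = LWP::UserAgent->new(
        agent   => 'Mozilla/5.0 (compatible; MyScript/0.1)',  # made-up string
        timeout => 30,
    );
    my $response = $ua->get($url);
    return $response->decoded_content if $response->is_success;
    die 'Unable to get content: ' . $response->status_line . "\n";
}
```

Call it as print fetch_page('http://www.computerforum.com/'); — status_line will at least tell you whether the server is answering 403, timing out, or something else.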

How to scrape, using LWP and a regex, the date argument to a javascript function?

那年仲夏 posted on 2019-12-01 23:00:14
I'm having difficulty scraping dates from a specific web page because the date is apparently an argument passed to a JavaScript function. I have written a few simple scrapers in the past without any major issues, so I didn't expect problems, but I am struggling with this one. The page has 5-6 dates in regular yyyy/mm/dd format, like this:

    dateFormat('2012/02/07')

Ideally I would like to remove everything except the half-dozen dates, which I want to save in an array. At this point, I can't even successfully get one date, let alone all of them. It is probably just a malformed regex that I have been using.
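Assuming the dates always appear as quoted arguments to dateFormat(), a global match in list context collects all of them in one pass; a sketch against made-up page source:

```perl
use strict;
use warnings;

# stand-in for the page source fetched with LWP
my $html = <<'HTML';
<script>
dateFormat('2012/02/07')
dateFormat('2012/03/15')
dateFormat('2012/04/01')
</script>
HTML

# m{...} avoids escaping the slashes; /g in list context returns
# every capture, one per dateFormat() call
my @dates = $html =~ m{dateFormat\('(\d{4}/\d{2}/\d{2})'\)}g;

print "$_\n" for @dates;   # 2012/02/07, 2012/03/15, 2012/04/01
```

The same match works directly on the string returned by LWP, so there is no need to strip the rest of the page first.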

How do I fetch just the beginning of a Web page with LWP?

好久不见. posted on 2019-12-01 11:16:11
Does anyone know the best way to fetch just 50% of a web page on a GET or POST request? The page I fetch takes 10-20 seconds to load completely, and I only need to filter a few lines from the beginning of it.

Answer:

    use 5.010;
    use strictures;
    use LWP::UserAgent qw();

    my $content;
    LWP::UserAgent->new->get(
        $url,
        ':content_cb' => sub {
            my ($chunk, $res) = @_;
            state $length = $res->header('Content-Length');
            $content .= $chunk;
            # die() in the callback aborts the transfer once
            # half of the body has arrived
            die if length($content) / $length > 0.5;
        },
    );

If the web site in question supplies the Content-Length header, you can just ask how much data is going to be sent before deciding how much of it to read.
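If a fixed byte budget is good enough, LWP::UserAgent's max_size attribute is a simpler way to cut the transfer short than a handwritten callback; a minimal sketch (the 16 KB limit is arbitrary):

```perl
use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new;
$ua->max_size(16 * 1024);    # stop reading the body after ~16 KB

# A response cut short this way carries a "Client-Aborted" header,
# so you can tell a truncated body from a complete one via
# $response->header('Client-Aborted').
print $ua->max_size, "\n";   # 16384
```

Unlike the Content-Length approach above, this also works for chunked responses where the server never announces the total size.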

How can I handle proxy servers with LWP::Simple?

痞子三分冷 posted on 2019-12-01 07:36:44
How can I add proxy support to this script?

    use LWP::Simple;
    $url = "http://stackoverflow.com";
    $word = "how to ask";
    $content = get $url;
    if ($content =~ m/$word/) { print "Found $word"; }

Answer: Access the underlying LWP::UserAgent object and set the proxy. LWP::Simple exports the $ua variable, so you can do that:

    use LWP::Simple qw( $ua get );
    $ua->proxy( 'http', 'http://myproxy.example.com' );
    my $content = get( 'http://www.example.com/' );

Source: https://stackoverflow.com/questions/542588/how-can-i-handle-proxy-servers-with-lwpsimple
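If the proxy address should come from the environment rather than being hard-coded, the same $ua object also supports env_proxy, which reads the standard http_proxy / no_proxy variables; a sketch with a hypothetical proxy URL:

```perl
use LWP::Simple qw( $ua );

# $ua is the LWP::UserAgent object LWP::Simple uses internally;
# env_proxy makes it honor the proxy environment variables
$ENV{http_proxy} = 'http://myproxy.example.com:3128';  # hypothetical proxy
$ua->env_proxy;

# the proxy now applies to every subsequent get() call
print $ua->proxy('http'), "\n";
```

This is handy on systems where the proxy is already configured for other tools via the environment.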

Why can't I fetch wikipedia pages with LWP::Simple?

邮差的信 posted on 2019-12-01 03:28:34
I'm trying to fetch Wikipedia pages using LWP::Simple, but they're not coming back. This code:

    #!/usr/bin/perl
    use strict;
    use LWP::Simple;

    print get("http://en.wikipedia.org/wiki/Stack_overflow");

doesn't print anything. But if I use some other web page, say http://www.google.com, it works fine. Is there some other name that I should be using to refer to Wikipedia pages? What could be going on here?

Answer: Apparently Wikipedia blocks LWP::Simple requests: http://www.perlmonks.org/?node_id=695886 . The following works instead:

    #!/usr/bin/perl
    use strict;
    use LWP::UserAgent;

    my $url = "http://en
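The snippet above is cut off in the source. Based on the linked PerlMonks thread, the fix is to send a non-default User-Agent string via LWP::UserAgent; a hedged sketch (the agent name is made up):

```perl
use strict;
use warnings;
use LWP::UserAgent;

# Wikipedia rejects LWP's default "libwww-perl" agent string, so
# identify the client with a custom User-Agent instead.
sub fetch_wiki_page {
    my ($url) = @_;
    my $ua = LWP::UserAgent->new(timeout => 30);
    $ua->agent('MyFetcher/0.1');    # hypothetical agent name
    my $response = $ua->get($url);
    die 'Fetch failed: ' . $response->status_line . "\n"
        unless $response->is_success;
    return $response->decoded_content;
}
```

Call it as print fetch_wiki_page('http://en.wikipedia.org/wiki/Stack_overflow'); — unlike LWP::Simple's get(), a failure here reports the HTTP status instead of silently returning undef.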