Question
So I'm confused about why my WebClient never reaches its DownloadStringCompleted handler. I've read about the usual causes: the WebClient being disposed before it can finish its download, exceptions not being caught during DownloadData, or the Uri simply being inaccessible. I've checked against all of these problems, and my WebClient still never reaches its DownloadStringCompleted handler.
PMID WebClient Class
private int pmid;
private Uri pmid_url;

/// <summary>
/// Construct a new curl
/// </summary>
/// <param name="pmid">pmid value</param>
public PMIDCurl(int pmid)
{
    this.pmid = pmid;
    StringBuilder pmid_url_string = new StringBuilder();
    pmid_url_string.Append("http://www.ncbi.nlm.nih.gov/pubmed/").Append(pmid.ToString()).Append("?report=xml");
    this.pmid_url = new Uri(pmid_url_string.ToString());
}
/// <summary>
/// Curl data from the PMID
/// </summary>
public void CurlPMID()
{
    WebClient client = new WebClient();
    client.DownloadStringCompleted += new DownloadStringCompletedEventHandler(HttpsCompleted);
    client.DownloadData(this.pmid_url);
}
/// <summary>
/// Information to store in the class after the curl
/// </summary>
public string AbstractTitle { get; set; }
public string AbstractText { get; set; }

/// <summary>
/// Retrieve the data from an xml file about a PMID
/// </summary>
/// <param name="sender">System Generated</param>
/// <param name="e">System Generated</param>
private void HttpsCompleted(object sender, DownloadStringCompletedEventArgs e)
{
    if (e.Error == null)
    {
        PMIDCrawler pmc = new PMIDCrawler(e.Result, "/pre/PubmedArticle/MedlineCitation/Article");
        //iterate over each node in the file
        foreach (XmlNode xmlNode in pmc.crawl)
        {
            this.AbstractTitle = xmlNode["ArticleTitle"].InnerText;
            this.AbstractText = xmlNode["Abstract"]["AbstractText"].InnerText;
        }
    }
} //close HttpsCompleted
PMID NodeList Constructor Class
// Requires: System.Web (HttpUtility), System.Xml, and System.Xml.Linq

/// <summary>
/// List initialized by crawler
/// </summary>
public XmlNodeList crawl { get; set; }

/// <summary>
/// Constructor for the HTML to XML converter
/// </summary>
/// <param name="nHtml">HTML-encoded XML returned by the download</param>
/// <param name="nodeList">XPath expression selecting the nodes to crawl</param>
public PMIDCrawler(string nHtml, string nodeList)
{
    //parse it from e
    string html = HttpUtility.HtmlDecode(nHtml);
    XDocument htmlDoc = XDocument.Parse(html, LoadOptions.None);

    //convert the xdocument to an xmldocument
    XmlDocument xmlDoc = new XmlDocument();
    xmlDoc.Load(htmlDoc.CreateReader());

    //load the xmlDocument into a nodelist
    XmlElement xmlRoot = xmlDoc.DocumentElement;
    this.crawl = xmlRoot.SelectNodes(nodeList);
}
Any ideas on why DownloadStringCompleted is never reached?
Answer 1:
You have several issues with your CurlPMID code. I have put comments in the code below.
public void CurlPMID()
{
    // 1. The variable 'client' loses scope when this function exits.
    //    You may want to consider making it a class variable, so it doesn't
    //    get disposed early.
    WebClient client = new WebClient();
    client.DownloadStringCompleted += new DownloadStringCompletedEventHandler(HttpsCompleted);

    // 2. You are calling the synchronous version of the download function.
    //    The synchronous version does not call any completion handlers.
    //    When the synchronous call returns, the download has completed.
    // 3. You are calling the wrong function here. Based on your completion handler,
    //    you should be calling DownloadStringAsync(). If you want synchronous
    //    behavior, call DownloadString() instead.
    client.DownloadData(this.pmid_url);
}
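For point 1, one way to keep the client alive for the whole download is to promote it to a class-level field. A minimal sketch follows; the field, and disposing it inside the completion handler, are assumptions added for illustration rather than part of the original code:

private WebClient client; // class-level, so it outlives CurlPMID

public void CurlPMID()
{
    client = new WebClient();
    client.DownloadStringCompleted += HttpsCompleted;
    client.DownloadStringAsync(this.pmid_url);
}

private void HttpsCompleted(object sender, DownloadStringCompletedEventArgs e)
{
    // ... process e.Result as before ...
    client.Dispose(); // assumption: dispose once the download has finished
}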
In short, assuming you want async behavior, your CurlPMID function should look like:
public void CurlPMID()
{
    WebClient client = new WebClient();
    client.DownloadStringCompleted += new DownloadStringCompletedEventHandler(HttpsCompleted);
    client.DownloadStringAsync(this.pmid_url);
}
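If you instead want the synchronous behavior mentioned in comment 3, a minimal sketch using DownloadString is below; the method name CurlPMIDSync and the inline result handling are illustrative assumptions, not code from the question:

public void CurlPMIDSync()
{
    using (WebClient client = new WebClient())
    {
        // DownloadString blocks until the response arrives, so no
        // completion event fires; handle the result inline instead.
        string result = client.DownloadString(this.pmid_url);
        PMIDCrawler pmc = new PMIDCrawler(result, "/pre/PubmedArticle/MedlineCitation/Article");
        foreach (XmlNode xmlNode in pmc.crawl)
        {
            this.AbstractTitle = xmlNode["ArticleTitle"].InnerText;
            this.AbstractText = xmlNode["Abstract"]["AbstractText"].InnerText;
        }
    }
}

Because DownloadString blocks the calling thread, the using block can safely dispose the client as soon as the call returns, which sidesteps the early-disposal concern from point 1 entirely.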
Source: https://stackoverflow.com/questions/15374328/webclient-never-reaching-downloadstringcompleted-on-wpf-application