MediaWiki

How do you grab an article including the links in a usable format?

Submitted by 懵懂的女人 on 2020-01-15 07:04:58
Question: I have an internal deployment of MediaWiki. Some articles contain external links. I have another page that makes API calls to the wiki to pull articles into another website. When I pull those articles in, the links do not come through properly. Here is an example.

Wiki article: Use [http://example.com THIS LINK] to contact the vendor.
API URL: https://mysite.com/mediawiki/api.php?action=query&format=json&prop=extracts&titles=Vendor
API result: Use THIS LINK to contact the vendor.

Notice the link target is gone; only its text survives.
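
The extracts property returns plain text / limited HTML by design, so it drops link targets. A minimal sketch of the usual workaround in Python, requesting rendered HTML via action=parse instead (the endpoint and page title are taken from the question; requests is an assumed dependency):

    import requests

    API = "https://mysite.com/mediawiki/api.php"

    # action=parse returns the rendered HTML of the page, so external links
    # survive as ordinary <a href="..."> anchors.
    resp = requests.get(API, params={
        "action": "parse",
        "page": "Vendor",
        "prop": "text",
        "format": "json",
    })
    html = resp.json()["parse"]["text"]["*"]
    print(html)  # includes <a href="http://example.com">THIS LINK</a>

Alternatively, prop=revisions with rvprop=content returns the raw wikitext, which keeps the [http://example.com THIS LINK] syntax intact for your own rendering.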

How can I get MediaWiki to ignore page views from a Google Search Appliance?

Submitted by 两盒软妹~` on 2020-01-14 20:41:52
Question: The page view counter on each MediaWiki page seems like a great way to identify popular pages that are worth the effort of keeping up to date and useful, but I've hit a problem. We use a Google Search Appliance to index our MediaWiki installation, and the GSA increments the page view counter each time it crawls a page. This completely dominates the statistics, swamping the views made by real users. I know how to reset the page counters to start again, but …
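
Before changing anything, it helps to measure how much of the traffic actually comes from the appliance. A rough sketch that tallies web-server log entries by User-Agent in Python, assuming a combined-format access log and the GSA's default "gsa-crawler" User-Agent string (both are assumptions; check your own log format and appliance settings):

    import re
    from collections import Counter

    # In the common combined log format the User-Agent is the last quoted field.
    UA_RE = re.compile(r'"([^"]*)"\s*$')

    counts = Counter()
    with open("access.log") as log:  # path is a placeholder
        for line in log:
            match = UA_RE.search(line)
            if match:
                agent = match.group(1)
                counts["gsa-crawler" if "gsa-crawler" in agent else "other"] += 1

    print(counts)

If the crawler dominates, the usual remedies are to stop counting requests whose User-Agent matches the appliance (e.g. conditionally in LocalSettings.php) or to derive popularity from the filtered logs instead of MediaWiki's built-in counter.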

LDAP on local domain with Mediawiki on Debian 10

Submitted by ぃ、小莉子 on 2020-01-14 06:40:12
Question: I have MediaWiki (1.34) running on a Debian 10 Linux VM on our local network. We have a local domain (abc.local) managed by Windows Server 2008 R2. I am trying to implement LDAP so that only abc.local domain users can use our wiki. I installed all the necessary extensions, and everything seems to work when I use a test ldapprovider.json. I don't know the credentials for that test domain, so I get an error; this seems to tell me that LDAP is working, though, and that it tried to authenticate based on the …
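
For comparison, a minimal ldapprovider.json sketch for Extension:LDAPProvider pointed at the abc.local domain. The host name, bind account, and password below are placeholders, not values from the question:

    {
        "abc.local": {
            "connection": {
                "server": "dc01.abc.local",
                "user": "cn=wiki-bind,cn=Users,dc=abc,dc=local",
                "pass": "BindPasswordHere",
                "basedn": "dc=abc,dc=local",
                "userbasedn": "cn=Users,dc=abc,dc=local",
                "searchattribute": "samaccountname",
                "usernameattribute": "samaccountname",
                "realnameattribute": "cn",
                "emailattribute": "mail"
            }
        }
    }

Once the error shows that LDAP itself is reachable, swapping the unknown test-domain credentials for a real abc.local bind account is the next step.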

What wiki tools exist to generate shippable user documentation from a wiki? [closed]

Submitted by 血红的双手。 on 2020-01-14 04:18:16
Question: [Closed as off-topic on Stack Overflow.] I am looking into using a wiki (preferably MediaWiki, but not a requirement) as the repository for developer-generated documentation (user guides, release notes, application notes, errata, etc.). From a collaborative, easy-to-update point of view a wiki seems like a good match; however, since this documentation will …

Where can I find a good MediaWiki Markup parser in PHP?

Submitted by ▼魔方 西西 on 2020-01-13 10:11:25
Question: I would try hacking MediaWiki's code a little, but I figured that would be unnecessary if I could get an independent parser. Can anyone help me with this? Thanks. Answer 1: Ben Hughes is right. It's very difficult to get this right, especially if you want to parse real articles from big wikis like Wikipedia itself with 100% accuracy. It is discussed frequently on the wikitech mailing list, and no alternative parser has come up with the goods despite many attempts. Firstly, it's not really a parser in …
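
If PHP turns out not to be a hard requirement, one pragmatic route is an existing wikitext library in another language. A small sketch using Python's mwparserfromhell, offered as an alternative rather than the PHP parser the question asks for:

    import mwparserfromhell  # pip install mwparserfromhell

    text = "Use [http://example.com THIS LINK] to contact the [[Vendor|vendor]]."
    code = mwparserfromhell.parse(text)

    print(code.filter_external_links())  # external [http://...] links
    print(code.filter_wikilinks())       # internal [[...]] links
    print(code.strip_code())             # plain text with markup stripped

Like all alternative parsers, it covers the markup syntax itself but cannot reproduce template expansion and parser functions the way MediaWiki's own rendering pipeline does.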

Finding subcategories of a Wikipedia category using the category and categorylinks tables

Submitted by 倾然丶 夕夏残阳落幕 on 2020-01-13 06:46:18
Question: I downloaded the category and categorylinks sql.gz dump files from MediaWiki and generated the required tables: category and categorylinks. Manuals for the tables: CategoryLinks, Category. Consider the category page NoSQL. The parent categories of this page are Database and Database management. How can I get this information from the two tables? The manual for the category table says the following, but I am unable to get that information: "Note: The pages and sub-categories are stored in …"
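
A sketch of the joins involved, assuming the standard MediaWiki schema. Note that categorylinks.cl_from holds a page_id, so the page table dump is also needed to resolve names (namespace 14 is the Category namespace, and stored titles use underscores for spaces):

    -- Parent categories of the category page "NoSQL":
    SELECT cl.cl_to AS parent_category
    FROM page AS p
    JOIN categorylinks AS cl ON cl.cl_from = p.page_id
    WHERE p.page_namespace = 14
      AND p.page_title = 'NoSQL';

    -- Conversely, the subcategories of "Database":
    SELECT p.page_title AS subcategory
    FROM categorylinks AS cl
    JOIN page AS p ON p.page_id = cl.cl_from
    WHERE cl.cl_to = 'Database'
      AND cl.cl_type = 'subcat';

The cl_type column ('page', 'subcat', or 'file') is what distinguishes subcategory membership from ordinary page membership.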

Python regex for finding contents of MediaWiki markup links

Submitted by Deadly on 2020-01-13 05:12:12
Question: If I have some XML containing MediaWiki markup like the following: "...collected in the 12th century, of which [[Alexander the Great]] was the hero, and in which he was represented, somewhat like the British [[King Arthur|Arthur]]", what would be the appropriate pattern for something like re.findall(r'[[__?__]]', article_entry)? I am stumbling a bit on escaping the double square brackets, and on getting the proper link from piped text like [[Alexander of Paris|poet named Alexander]]. Answer 1: Here is …
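
The answer is cut off above, so here is a sketch of a pattern that handles both plain and piped links (mine, not necessarily the answerer's):

    import re

    text = ("...collected in the 12th century, of which [[Alexander the Great]] "
            "was the hero, and in which he was represented, somewhat like the "
            "British [[King Arthur|Arthur]]")

    # Group 1 is the link target; group 2 is the optional piped display text.
    links = re.findall(r"\[\[([^|\]]+)(?:\|([^\]]+))?\]\]", text)
    print(links)  # [('Alexander the Great', ''), ('King Arthur', 'Arthur')]

Escaping each bracket individually (\[\[ … \]\]) is what the question was missing, and the character class [^|\]] stops the link target at either a pipe or the closing brackets.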