
What We Did with Semantic MediaWiki

I work as a mechanical engineer (among a few other roles) for a global food ingredients manufacturer. I installed MediaWiki 1.13 on our LAN in 2008, not sure where it would go, only knowing that it could be a path to significantly improving productivity given an evident struggle with knowledge management. Around that time I read an article about the 'top 10 MediaWiki extensions' which identified SMW as something that could 'add structured data' to your wiki. I didn't really understand what that meant. I installed SMW in 2011 and Semantic Forms (SF) in 2012, and have been on a very rewarding path structuring some of the vast knowledge at our site in Eastern Ontario, Canada. Here are some of the things we use SMW/SF for:

  • A 'Lock Out Tag Out' repository (database) that structures creation, approval, and display of procedures, all in articles, for equipment isolation before performing maintenance.
  • Display of statistical 'Control Charts' (400+ of them) that harmonize data from 3 different sources and allow them to be displayed together.
  • Useful 'Dashboards' of important live information about processes in the plant.
  • Calculation and display of 'Key Performance Indicators' and visualizations across the site.
  • Electronic 'log sheets' that free us from shuffling papers around all day and make auditing a pleasure.
  • Management of action-oriented content articles where we describe efforts or ideas and document their progress or status, containing 70+ properties that allow unprecedented reporting across dynamic areas of interest:
    • Scaled risk assessment
    • Effort alignment
    • Resource (people) allocation
    • Cost Savings Reporting
    • Task lists
    • Schedules, gantt charts
  • Documentation of internal auditing activities that drive corrective action resolution and management KPIs (which will tattle on you if you're late on an activity).
  • Taking of meeting minutes, where participants (as wiki users), dates, departments, and related action item articles are all identified specifically.
  • Event calendars to drive communication of plant activities.
  • Traceability logs identifying what products/packaging was reprocessed, when, by whom, and how much. This drives dynamic reporting on costs to the business and makes auditing a snap.
  • Equipment information. While our ERP system handles the dollars and cents, SMW allows for a people side: documentation of important information related to thousands of pieces of equipment, with expedited access to it.
  • Caching and display of wiki-computed KPIs like Overall Equipment Effectiveness (OEE).
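As a sketch of the kind of reporting this enables, an inline SMW query can pull structured data from many articles into one table. The category and property names below are hypothetical, not our actual schema:

```
{{#ask: [[Category:Action items]] [[Has status::Open]]
 |?Has owner
 |?Has due date
 |?Has cost savings
 |format=table
}}
```

A query like this placed on any page stays live: as articles are created or updated through forms, the table updates with them.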

In all it has been terribly exciting, as there have been extremely few problems that can't be solved with a form, a template, and a few articles. While software doesn't actually fix anything, rapid access to information drives critical and informed discussion right now. As of 2016, one of the only things we don't use SMW for is document/content management. While it would be a perfect fit, in industry we still cling to a paper mindset when it comes to actual information. We love love love spreadsheets, email, and shared drives. It will take time before we evolve our thinking toward centralized, community-driven knowledge. But it's coming. It has to. Economic laws foresee it.

Autonomous Page Creation with Semantic Forms

Further to discussion here. I wanted some way to have certain Semantic Forms execute automatically (as when clicking an #autoedit button) via a cron job. The form would then create/update a page with a template reference that would take care of setting up some properties and such once the page was created. I had a similar script from a few years ago, never put into production, that used cURL to create articles after authenticating itself. Adjusting it to hit the sfautoedit API instead had the desired effect.

<?php
$settings['wikiroot'] = "";   // base URL of the wiki server; the script appends "/mediawiki/api.php"
$settings['user'] = "wikiuser";
$settings['pass'] = "wikiuser password";
$settings['cookiefile'] = "cookies.tmp";

// Perform an HTTP request, reusing the cookie file so the login session persists.
function httpRequest($url, $post = "") {
        global $settings;
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv: Gecko/20071025 Firefox/');
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_ENCODING, "UTF-8");
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_COOKIEFILE, $settings['cookiefile']);
        curl_setopt($ch, CURLOPT_COOKIEJAR, $settings['cookiefile']);
        // Supplying POSTFIELDS makes cURL issue a POST; otherwise this is a GET
        if (!empty($post)) curl_setopt($ch, CURLOPT_POSTFIELDS, $post);
        $xml = curl_exec($ch);
        if (!$xml) {
                throw new Exception("Error getting data from server ($url): " . curl_error($ch));
        }
        return $xml;
}
// Log in via the MediaWiki API. Called without a token, it returns the login
// token; called again with that token, it confirms the login.
function login($user, $pass, $token = '') {
        global $settings;

        $url = $settings['wikiroot'] . "/mediawiki/api.php?action=login&format=xml";

        $params = "action=login&lgname=$user&lgpassword=$pass";
        if (!empty($token)) {
                $params .= "&lgtoken=$token";
        }

        $data = httpRequest($url, $params);
        if (empty($data)) {
                throw new Exception("No data received from server. Check that API is enabled.");
        }

        $xml = simplexml_load_string($data);
        if (!empty($token)) {
                // Second call: check for successful login
                $expr = "/api/login[@result='Success']";
                $result = $xml->xpath($expr);

                if (!count($result)) {
                        throw new Exception("Login failed");
                }
        } else {
                // First call: extract the login token to confirm with
                $expr = "/api/login[@token]";
                $result = $xml->xpath($expr);

                if (!count($result)) {
                        throw new Exception("Login token not found in XML");
                }
                return $result[0]->attributes()->token;
        }
}

function getEditToken() {
	global $settings;
	$data = httpRequest($settings['wikiroot'] . "/mediawiki/api.php?action=query&prop=info|revisions&intoken=edit&titles=Main%20Page&format=xml");
	$xml = simplexml_load_string($data);
	$path = "/api/query/pages/page[@edittoken]";
	$result = $xml->xpath($path);
	if (!count($result)) { throw new Exception("Edit token not found in XML"); }
	return $result[0]->attributes()->edittoken;
}
#Get the forms that will be run (in a pipe-delimited list at 'Template:Forms to run').
$F_url = "";   // expandtemplates API URL that returns the contents of 'Template:Forms to run' as JSON
$F_json = file_get_contents($F_url);
$F_data = json_decode($F_json, TRUE);
$F_array = array();

#If there was output, explode on the pipes, load them into an array, and loop through them.
if (!empty($F_data['expandtemplates']['*'])) {
	$F_array = explode('|', $F_data['expandtemplates']['*']);
	foreach ($F_array as $form) {
		// Log in to MediaWiki (two-step token confirmation) and hit the forms.
		try {
			$token = login($settings['user'], $settings['pass']);
			login($settings['user'], $settings['pass'], $token);
			$etoken = getEditToken();
			httpRequest($settings['wikiroot'] . "/mediawiki/api.php?action=sfautoedit&form=" . $form);
		} catch (Exception $e) {
			die("FAILED: " . $e->getMessage());
		}
	}
}
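To make the script actually run unattended, it can be invoked from cron. A minimal sketch of the crontab entry, assuming the script is saved as /usr/local/bin/run_forms.php (the path, schedule, and log location are all assumptions, not part of the original setup):

```shell
# Run the form-automation script every day at 6:00 a.m.,
# appending both output and errors to a log for troubleshooting
0 6 * * * /usr/bin/php /usr/local/bin/run_forms.php >> /var/log/run_forms.log 2>&1
```

Logging the output matters here because the script die()s with a FAILED message on any exception; the log is the only place that message will surface when no one is watching a terminal.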