If your PHP is configured as a CGI plug-in rather than an Apache module you may
experience problems, as this configuration is not well tested. safe_mode is also
not tested and unlikely to work.
If you want math support see the instructions in math/README
******************* WARNING *******************
REMEMBER: ALWAYS BACK UP YOUR DATABASE BEFORE
ATTEMPTING TO INSTALL OR UPGRADE!!!
******************* WARNING *******************
For system requirements, installation and upgrade details, see the files
RELEASE-NOTES, INSTALL, and UPGRADE.
== MediaWiki ==
released into the public domain, which does not impair the obligations of users
under the GPL for use of the whole code or other sections thereof.
MediaWiki makes use of the Sajax Toolkit by modernmethod,
http://www.modernmethod.com/sajax/ which has the following license:
'This work is licensed under the Creative Commons Attribution
* } else {
* $wgProfiler['class'] = 'ProfilerStub';
* }
 *
* Configuration of the profiler output can be done in LocalSettings.php
*/
==== From the web ====
If you browse to the web-based installation script (usually at
/mw-config/index.php) from your wiki installation you can follow the script and
upgrade your database in place.
==== From the command line ====
If you absolutely cannot make the UTF-8 upgrade work, you can try
doing it by hand: dump your old database, convert the dump file
using iconv as described here:
http://portal.suse.com/sdb/en/2004/05/jbartsh_utf-8.html
and then reimport it. You can also convert filenames using convmv,
but note that the old directory hashes will no longer be valid,
* $wgSharedPrefix is the table prefix for the shared database. It defaults to
* $wgDBprefix.
*
 * @deprecated In new code, use the $wiki parameter to wfGetLB() to access
 * remote databases. Using wfGetLB() allows the shared database to reside on
 * separate servers from the wiki's own database, with suitable configuration
* of $wgLBFactoryConf.
*/
$wgSharedDB = null;
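For illustration, a minimal LocalSettings.php sketch of the sharing setup described above; the database and prefix names are invented examples, not values from this codebase:

```php
// Hypothetical LocalSettings.php fragment: share tables with another
// wiki that lives in a "commonwiki" database on the same server.
$wgSharedDB = 'commonwiki';   // database holding the shared tables
$wgSharedPrefix = 'cw_';      // its table prefix; defaults to $wgDBprefix
```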
/**
* Whether to use a Host header in purge requests sent to the proxy servers
 * configured in $wgSquidServers. Set this to false to support Squid
* configured in forward-proxy mode.
*
* If this is set to true, a Host header will be sent, and only the path
* component of the URL will appear on the request line, as if the request
 * were a non-proxy HTTP 1.1 request. Varnish only supports this style of
* request. Squid supports this style of request only if reverse-proxy mode
* (http_port ... accel) is enabled.
*
* will be sent in the request line, as is the standard for an HTTP proxy
* request in both HTTP 1.0 and 1.1. This style of request is not supported
* by Varnish, but is supported by Squid in either configuration (forward or
 * reverse).
*
* @since 1.21
*/
*
* @file
*/
/**
 * Base class for general text storage via the "object" flag in old_flags, or
 * two-part external storage URLs. Used to represent efficient concatenated
* storage, and migration-related pointer objects.
*/
interface HistoryBlob
* @return bool
*/
public function isHappy() {
return $this->mSize < $this->mMaxSize
&& count( $this->mItems ) < $this->mMaxCount;
}
}
/** Total uncompressed size */
var $mSize = 0;
/**
 * Array of diffs. If a diff D from A to B is notated D = B - A, and Z is
* an empty string:
*
* { item[map[i]] - item[map[i-1]] where i > 0
 * diff[i] = {
* { item[map[i]] - Z where i = 0
*/
var $mDiffs;
* The maximum number of text items before the object becomes sad
*/
var $mMaxCount = 100;
/** Constants from xdiff.h */
const XDL_BDOP_INS = 1;
const XDL_BDOP_CPY = 2;
* @throws MWException
*/
function compress() {
if ( !function_exists( 'xdiff_string_rabdiff' ) ) {
throw new MWException( "Need xdiff 1.5+ support to write DiffHistoryBlob\n" );
}
if ( isset( $this->mDiffs ) ) {
# Pure PHP implementation
$header = unpack( 'Vofp/Vcsize', substr( $diff, 0, 8 ) );
# Check the checksum if hash/mhash is available
$ofp = $this->xdiffAdler32( $base );
if ( $ofp !== false && $ofp !== substr( $diff, 0, 4 ) ) {
wfDebug( __METHOD__. ": incorrect base length\n" );
return false;
}
$p = 8;
$out = '';
while ( $p < strlen( $diff ) ) {
}
/**
 * Compute a binary "Adler-32" checksum as defined by LibXDiff, i.e. with
* the bytes backwards and initialised with 0 instead of 1. See bug 34428.
*
* Returns false if no hashing library is available
if ( $init === null ) {
$init = str_repeat( "\xf0", 205 ) . "\xee" . str_repeat( "\xf0", 67 ) . "\x02";
}
// The real Adler-32 checksum of $init is zero, so it initialises the
// state to zero, as it is at the start of LibXDiff's checksum
// algorithm. Appending the subject string then simulates LibXDiff.
if ( function_exists( 'hash' ) ) {
$hash = hash( 'adler32', $init . $s, true );
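Based only on the comments above, the whole trick can be sketched as a standalone function. This is an assumption-laden illustration, not the class's actual method: it presumes the hash extension is present and that the standard Adler-32 of $init really is zero, as the comment states.

```php
<?php
// LibXDiff-flavoured Adler-32: initialised with 0 instead of 1, and
// stored byte-reversed relative to PHP's big-endian raw digest.
function libxdiffAdler32( $s ) {
	// Prefix whose standard Adler-32 is zero (per the comment above),
	// so the rolling state starts at zero, as LibXDiff's does.
	$init = str_repeat( "\xf0", 205 ) . "\xee" . str_repeat( "\xf0", 67 ) . "\x02";
	// hash() returns the raw digest big-endian; LibXDiff stores it backwards.
	return strrev( hash( 'adler32', $init . $s, true ) );
}
```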
if ( isset( $info['base'] ) ) {
// Old format
$this->mDiffMap = range( 0, count( $this->mDiffs ) - 1 );
array_unshift( $this->mDiffs,
pack( 'VVCV', 0, 0, self::XDL_BDOP_INSB, strlen( $info['base'] ) ) .
$info['base'] );
} else {
* @return bool
*/
function isHappy() {
return $this->mSize < $this->mMaxSize
&& count( $this->mItems ) < $this->mMaxCount;
}
'class', // html4, html5
'accesskey', // as of html5, multiple space-separated values allowed
// html4-spec doesn't document rel= as space-separated
// but has been used like that and is now documented as such
// in the html5-spec.
'rel',
);
// values. Implode/explode to get those into the main array as well.
if ( is_array( $value ) ) {
// If input wasn't an array, we can skip this step
$newValue = array();
foreach ( $value as $k => $v ) {
if ( is_string( $v ) ) {
# @todo FIXME: Is this really true?
$map['<'] = '&lt;';
}
$ret .= " $key=$quote" . strtr( $value, $map ) . $quote;
}
}
/**
* Some functions to help implement an external link filter for spam control.
 *
* @todo implement the filter. Currently these are just some functions to help
* maintenance/cleanupSpam.php remove links to a single specified domain. The
* next thing is to implement functions for checking a given page against a big
// Reverse the labels in the hostname, convert to lower case
// For emails reverse domainpart only
if ( $prot == 'mailto:' && strpos($host, '@') ) {
// complete email address
$mailparts = explode( '@', $host );
$domainpart = strtolower( implode( '.', array_reverse( explode( '.', $mailparts[1] ) ) ) );
$host = $domainpart . '@' . $mailparts[0];
$like = array( "$prot$host", $db->anyString() );
} elseif ( $prot == 'mailto:' ) {
// domainpart of email address only. do not add '.'
$host = strtolower( implode( '.', array_reverse( explode( '.', $host ) ) ) );
$like = array( "$prot$host", $db->anyString() );
} else {
$host = strtolower( implode( '.', array_reverse( explode( '.', $host ) ) ) );
if ( substr( $host, -1, 1 ) !== '.' ) {
$host .= '.';
}
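Seen in isolation, the label reversal above works like this (a self-contained sketch; $prot, $db and the LIKE machinery from the surrounding code are omitted):

```php
<?php
// Reversing host labels turns "www.example.com" into "com.example.www.",
// so a left-anchored LIKE prefix such as "com.example." matches every
// host under example.com with an ordinary index scan.
$host = 'www.example.com';
$reversed = strtolower( implode( '.', array_reverse( explode( '.', $host ) ) ) ) . '.';
echo $reversed; // com.example.www.
```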
*/
public static function buildRollbackLink( $rev, IContextSource $context = null ) {
global $wgShowRollbackEditCount, $wgMiserMode;

// To configure which pages are affected by miser mode
$disableRollbackEditCountSpecialPage = array( 'Recentchanges', 'Watchlist' );
*/
static function makeBrokenLink( $title, $text = '', $query = '', $trail = '' ) {
wfDeprecated( __METHOD__, '1.16' );
$nt = Title::newFromText( $title );
if ( $nt instanceof Title ) {
return self::makeBrokenLinkObj( $nt, $text, $query, $trail );
*/
static function makeLinkObj( $nt, $text = '', $query = '', $trail = '', $prefix = '' ) {
# wfDeprecated( __METHOD__, '1.16' ); // See r105985 and its revert. Still used somewhere.

wfProfileIn( __METHOD__ );
$query = wfCgiToArray( $query );
list( $inside, $trail ) = self::splitTrail( $trail );
$title, $text = '', $query = '', $trail = '', $prefix = '' , $aprops = '', $style = ''
) {
# wfDeprecated( __METHOD__, '1.16' ); // See r105985 and its revert. Still used somewhere.

wfProfileIn( __METHOD__ );
if ( $text == '' ) {
*/
static function makeBrokenLinkObj( $title, $text = '', $query = '', $trail = '', $prefix = '' ) {
wfDeprecated( __METHOD__, '1.16' );
wfProfileIn( __METHOD__ );
list( $inside, $trail ) = self::splitTrail( $trail );
*/
static function makeColouredLinkObj( $nt, $colour, $text = '', $query = '', $trail = '', $prefix = '' ) {
wfDeprecated( __METHOD__, '1.16' );
if ( $colour != '' ) {
$style = self::getInternalLinkAttributesObj( $nt, $text, $colour );
} else {
* This is used as a fallback to mime.types files.
* An extensive list of well known mime types is provided by
* the file mime.types in the includes directory.
 *
* This list concatenated with mime.types is used to create a mime <-> ext
* map. Each line contains a mime type followed by a space separated list of
 * extensions. If multiple extensions for a single mime type exist or if
* multiple mime types exist for a single extension then in most cases
* MediaWiki assumes that the first extension following the mime type is the
* canonical extension, and the first time a mime type appears for a certain
* extension is considered the canonical mime type.
 *
* (Note that appending $wgMimeTypeFile to the end of MM_WELL_KNOWN_MIME_TYPES
 * sucks because you can't redefine canonical types. This could be fixed by
* appending MM_WELL_KNOWN_MIME_TYPES behind $wgMimeTypeFile, but who knows
* what will break? In practice this probably isn't a problem anyway -- Bryan)
*/
image/gif gif
image/jpeg jpeg jpg jpe
image/png png
image/svg+xml svg
image/svg svg
image/tiff tiff tif
image/vnd.djvu djvu
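The canonical-first rule described in the comment above can be sketched with a toy parser (illustrative only; the real map building lives elsewhere in this class):

```php
<?php
// Build mime <-> extension maps from "type ext ext ..." lines.
// The first extension after a mime type is its canonical extension;
// the first mime type seen for an extension is its canonical mime type.
$lines = array(
	'image/jpeg jpeg jpg jpe',
	'image/svg+xml svg',
	'image/svg svg',
);
$mimeToExts = array();
$extToMime = array();
foreach ( $lines as $line ) {
	$parts = preg_split( '/\s+/', trim( $line ) );
	$mime = array_shift( $parts );
	if ( !isset( $mimeToExts[$mime] ) ) {
		$mimeToExts[$mime] = $parts;
	}
	foreach ( $parts as $ext ) {
		if ( !isset( $extToMime[$ext] ) ) {
			$extToMime[$ext] = $mime;
		}
	}
}
echo $extToMime['svg'], "\n";            // image/svg+xml (listed first wins)
echo $mimeToExts['image/jpeg'][0], "\n"; // jpeg
```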
return self::$instance;
}
/**
 * Returns a list of file extensions for a given mime type as a space
* separated string or null if the mime type was unrecognized. Resolves
* mime type aliases.
 *
* @param $mime string
* @return string|null
*/
return null;
}
/**
 * Returns a list of mime types for a given file extension as a space
* separated string or null if the extension was unrecognized.
 *
* @param $ext string
* @return string|null
*/
return $r;
}
/**
* Returns a single mime type for a given file extension or null if unknown.
* This is always the first type from the list returned by getTypesForExtension($ext).
 *
* @param $ext string
* @return string|null
*/
}
/**
 * Tests if the extension matches the given mime type. Returns true if a
 * match was found, null if the mime type is unknown, and false if the
 * mime type is known but no matches were found.
 *
* @param $extension string
* @param $mime string
* @return bool|null
return in_array( $extension, $ext );
}
/**
 * Returns true if the mime type is known to represent an image format
* supported by the PHP GD library.
*
* @param $mime string
 *
* @return bool
*/
public function isPHPImageType( $mime ) {
return in_array( strtolower( $extension ), $types );
}
/**
 * Improves a mime type using the file extension. Some file formats are very generic,
 * so their mime type is not very meaningful. A more useful mime type can be derived
 * by looking at the file extension. Typically, this method would be called on the
 * result of guessMimeType().
 *
* Currently, this method does the following:
*
* If $mime is "unknown/unknown" and isRecognizableExtension( $ext ) returns false,
 * return the result of guessTypesForExtension($ext).
*
* If $mime is "application/x-opc+zip" and isMatchingExtension( $ext, $mime )
 * gives true, return the result of guessTypesForExtension($ext).
*
* @param $mime String: the mime type, typically guessed from a file's content.
* @param $ext String: the file extension, as taken from the file name
public function improveTypeFromExtension( $mime, $ext ) {
if ( $mime === 'unknown/unknown' ) {
if ( $this->isRecognizableExtension( $ext ) ) {
wfDebug( __METHOD__. ': refusing to guess mime type for .' .
"$ext file, we should have recognized it\n" );
} else {
// Not something we can detect, so simply
// trust the file extension
$mime = $this->guessTypesForExtension( $ext );
}
// find the proper mime type for that file extension
$mime = $this->guessTypesForExtension( $ext );
} else {
- wfDebug( __METHOD__. ": refusing to guess better type for $mime file, " .
+ wfDebug( __METHOD__. ": refusing to guess better type for $mime file, " .
".$ext is not a known OPC extension.\n" );
$mime = 'application/zip';
}
return $mime;
}
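A typical two-step call sequence for the behaviour documented above might look like this (a sketch that assumes a bootstrapped MediaWiki environment; $path and $ext are placeholders):

```php
$magic = MimeMagic::singleton();
// Sniff the content first, deliberately ignoring the extension...
$mime = $magic->guessMimeType( $path, false );
// ...then let the extension refine generic results such as
// "unknown/unknown" or "application/x-opc+zip".
$mime = $magic->improveTypeFromExtension( $mime, $ext );
```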
/**
 * Mime type detection. This uses detectMimeType to detect the mime type
 * of the file, but applies additional checks to determine some well known
 * file formats that may be missed or misinterpreted by the default mime
 * detection (namely XML based formats like XHTML or SVG, as well as ZIP
 * based formats like OPC/ODF files).
 *
 * @param $file String: the file to check
 * @param $ext Mixed: the file extension, or true (default) to extract it from the filename.
 * Set it to false to ignore the extension. DEPRECATED! Set to false, use
* improveTypeFromExtension($mime, $ext) later to improve mime type.
*
* @return string the mime type of $file
// @todo FIXME: Shouldn't this be rb?
$f = fopen( $file, 'rt' );
wfRestoreWarnings();
if( !$f ) {
return 'unknown/unknown';
}
return false;
}
/**
* Detect application-specific file type of a given ZIP file from its
* header data. Currently works for OpenDocument and OpenXML types...
* @param $header String: some reasonably-sized chunk of file header
* @param $tail String: the tail of the file
* @param $ext Mixed: the file extension, or true to extract it from the filename.
 * Set it to false (default) to ignore the extension. DEPRECATED! Set to false,
* use improveTypeFromExtension($mime, $ext) later to improve mime type.
*
* @return string
wfDebug( __METHOD__.": detected $mime from ZIP archive\n" );
} elseif ( preg_match( $openxmlRegex, substr( $header, 30 ) ) ) {
$mime = "application/x-opc+zip";
# TODO: remove the block below, as soon as improveTypeFromExtension is used everywhere
if ( $ext !== true && $ext !== false ) {
/** This is the mode used by getPropsFromPath
* These MIME types are stored in the database, where we don't really want
* x-opc+zip, because we use it only for internal purposes
}
}
wfDebug( __METHOD__.": detected an Open Packaging Conventions archive: $mime\n" );
} elseif ( substr( $header, 0, 8 ) == "\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1" &&
($headerpos = strpos( $tail, "PK\x03\x04" ) ) !== false &&
preg_match( $openxmlRegex, substr( $tail, $headerpos + 30 ) ) ) {
if ( substr( $header, 512, 4) == "\xEC\xA5\xC1\x00" ) {
$mime = "application/msword";
}
switch( substr( $header, 512, 6) ) {
case "\xEC\xA5\xC1\x00\x0E\x00":
case "\xEC\xA5\xC1\x00\x1C\x00":
return $mime;
}
/**
 * Internal mime type detection. Detection is done using an external
 * program, if $wgMimeDetectorCommand is set. Otherwise, the fileinfo
 * extension and mime_content_type are tried (in this order), if they
 * are available. If the detection fails and $ext is not false, the mime
 * type is guessed from the file extension, using guessTypesForExtension.
 *
 * If the mime type is still unknown, getimagesize is used to detect the
 * mime type if the file is an image. If no mime type can be determined,
 * this function returns 'unknown/unknown'.
 *
 * @param $file String: the file to check
 * @param $ext Mixed: the file extension, or true (default) to extract it from the filename.
 * Set it to false to ignore the extension. DEPRECATED! Set to false, use
* improveTypeFromExtension($mime, $ext) later to improve mime type.
*
* @return string the mime type of $file
return $type;
}
/**
* Returns a media code matching the given mime type or file extension.
* File extensions are represented by a string starting with a dot (.) to
* distinguish them from mime types.
* @return int|string
*/
function findMediaType( $extMime ) {
if ( strpos( $extMime, '.' ) === 0 ) {
// If it's an extension, look up the mime types
$m = $this->getTypesForExtension( substr( $extMime, 1 ) );
if ( !$m ) {
}
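The leading-dot convention reads like this in practice (illustrative sketch, assuming a loaded MimeMagic instance; the djvu values come from the mime list above):

```php
$magic = MimeMagic::singleton();
$t1 = $magic->findMediaType( '.djvu' );          // looked up as a file extension
$t2 = $magic->findMediaType( 'image/vnd.djvu' ); // treated as a mime type directly
```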
/**
 * Get the MIME types that various versions of Internet Explorer would
* detect from a chunk of the content.
*
* @param $fileName String: the file name (unused at present)
// this is a fallback SQL file
$testSqlFile = false;
$testImageZip = false;
// if we find a request parameter containing the test name, set a cookie with the test name
if ( isset( $_GET['setupTestSuite'] ) ) {
$setupTestSuiteName = $_GET['setupTestSuite'];
true
);
}
$testIncludes = array(); // array containing all the includes needed for this test
$testGlobalConfigs = array(); // an array containing all the global configs needed for this test
$testResourceFiles = array(); // an array containing all the resource files needed for this test
if ( isset( $testResourceFiles['images'] ) ) {
$testImageZip = $testResourceFiles['images'];
}
if ( isset( $testResourceFiles['db'] ) ) {
$testSqlFile = $testResourceFiles['db'];
$testResourceName = getTestResourceNameFromTestSuiteName( $setupTestSuiteName );
switchToTestResources( $testResourceName, false ); // false means do not switch database yet
setupTestResources( $testResourceName, $testSqlFile, $testImageZip );
}
if ( isset( $_GET['clearTestSuite'] ) ) {
$testSuiteName = getTestSuiteNameFromCookie( $cookieName );
$expire = time() - 600;
setcookie(
$cookieName,
'',
$wgCookieSecure,
true
);
$testResourceName = getTestResourceNameFromTestSuiteName( $testSuiteName );
teardownTestResources( $testResourceName );
}
// if a cookie is found, run the appropriate callback to get the config params.
if ( isset( $_COOKIE[$cookieName] ) ) {
$testSuiteName = getTestSuiteNameFromCookie( $cookieName );
if ( !isset( $wgSeleniumTestConfigs[$testSuiteName] ) ) {
return;
}
$testIncludes = array(); // array containing all the includes needed for this test
$testGlobalConfigs = array(); // an array containing all the global configs needed for this test
$testResourceFiles = array(); // an array containing all the resource files needed for this test
$callback = $wgSeleniumTestConfigs[$testSuiteName];
call_user_func_array( $callback, array( &$testIncludes, &$testGlobalConfigs, &$testResourceFiles));
if ( isset( $testResourceFiles['db'] ) ) {
require_once( $file );
}
foreach ( $testGlobalConfigs as $key => $value ) {
if ( is_array( $value ) ) {
$GLOBALS[$key] = array_merge( $GLOBALS[$key], $value );
} else {
$GLOBALS[$key] = $value;
}
if ( $testResourceName == '' ) {
die( 'Cannot identify a test the resources should be installed for.' );
}
// create tables
$dbw = wfGetDB( DB_MASTER );
$dbw->query( 'DROP DATABASE IF EXISTS ' . $testResourceName );
}
/**
 * Get the error message as HTML. This is done by parsing the wikitext error
* message.
*/
public function getHTML( $shortContext = false, $longContext = false ) {
$linkCache = LinkCache::singleton();
$cached = $linkCache->getGoodLinkFieldObj( $this, 'redirect' );
if ( $cached === null ) {
// TODO: check the assumption that the cache actually knows about this title
// and handle this, such as get the title from the database.
// See https://bugzilla.wikimedia.org/show_bug.cgi?id=37209
}
/**
 * Add this existing user object to the database. If the user already
 * exists, a fatal status object is returned, and the user object is
* initialised with the data from the database.
*
* Previously, this function generated a DB error due to a key conflict
* }
* // do something with $user...
*
 * However, this was vulnerable to a race condition (bug 16020). By
* initialising the user object if the user exists, we aim to support this
* calling sequence as far as possible.
*
* Note that if the user exists, this function will acquire a write lock,
 * so it is still advisable to make the call conditional on isLoggedIn(),
* and to commit the transaction after calling.
*
* @return Status
array( 'IGNORE' )
);
if ( !$dbw->affectedRows() ) {
$this->mId = $dbw->selectField( 'user', 'user_id',
array( 'user_name' => $this->mName ), __METHOD__ );
$loaded = false;
if ( $this->mId ) {
$data = $this->getResultData();
foreach ( $data['query']['pages'] as $pageid => $arr ) {
if ( is_array( $arr ) && !isset( $arr['imagerepository'] ) ) {
$result->addValue(
array( 'query', 'pages', $pageid ),
'imagerepository', ''
* 'DBprefix' => '',
* );
*
 * All fields must be present. These mean the same things as $wgDBtype,
 * $wgDBserver, etc. This implementation is quite crude; it could easily
 * support multiple database servers, for instance, and memcached, and it
 * probably has bugs. Kind of hard to reuse code when things might rely on who
* knows what configuration globals.
*
 * If either wiki uses the UserComparePasswords hook, password authentication
 * might fail unexpectedly unless they both do the exact same validation.
 * There may be other corner cases like this where this will fail, but it
* should be unlikely.
*
* @ingroup ExternalUser
* @return bool
*/
protected function initFromName( $name ) {
# We might not need the 'usable' bit, but let's be safe. Theoretically
# this might return wrong results for old versions, but it's probably
# good enough.
$name = User::getCanonicalName( $name, 'usable' );
}
public function authenticate( $password ) {
# This might be wrong if anyone actually uses the UserComparePasswords hook
# (on either end), so don't use this if those are incompatible.
return User::comparePasswords( $this->mRow->user_password, $password,
$this->mRow->user_id );
}
public function getPref( $pref ) {
# @todo FIXME: Return other prefs too. Lots of global-riddled code that does
# this normally.
if ( $pref === 'emailaddress'
&& $this->row->user_email_authenticated !== null ) {
/**
* Guess the MIME type from the file contents alone
 *
 * @return string
*/
public function getMimeType() {
return MimeMagic::singleton()->guessMimeType( $this->path, false );
/**
* Get the final file extension from a file system path
 *
* @param $path string
* @return string
*/
/**
* A repository for files accessible via the local filesystem.
* Does not support database access or registration.
 *
 * This is mostly a legacy class. New uses should not be added.
 *
* @ingroup FileRepo
* @deprecated since 1.19
*/
/**
* Get a key on the primary cache for this repository.
 * Returns false if the repository's cache is not accessible at this site.
* The parameters are the parts of the key, as for wfMemcKey().
* @return bool|mixed
*/
/**
* Get a key on the primary cache for this repository.
 * Returns false if the repository's cache is not accessible at this site.
* The parameters are the parts of the key, as for wfMemcKey().
* @return bool|string
*/
__METHOD__,
array( 'ORDER BY' => 'img_name' )
);
$result = array();
foreach ( $res as $row ) {
$result[] = $this->newFileFromRow( $row );
/**
* Get a key on the primary cache for this repository.
 * Returns false if the repository's cache is not accessible at this site.
* The parameters are the parts of the key, as for wfMemcKey().
*
* @return string
const TYPE_DO = 15; // keywords: case, var, finally, else, do, try
const TYPE_FUNC = 16; // keywords: function
const TYPE_LITERAL = 17; // all literals, identifiers and unrecognised tokens
// Sanity limit to avoid excessive memory usage
const STACK_LIMIT = 1000;
self::TYPE_LITERAL => true
)
);
// Rules for when newlines should be inserted if
// $statementsOnOwnLine is enabled.
// $newlineBefore is checked before switching state,
return self::parseError($s, $end, 'Number with several E' );
}
$end++;
// + sign is optional; - sign is required.
$end += strspn( $s, '-+', $end );
$len = strspn( $s, '0123456789', $end );
$out .= ' ';
$lineLength++;
}
$out .= $token;
$lineLength += $end - $pos; // += strlen( $token )
$last = $s[$end - 1];
$pos = $end;
$newlineFound = false;
// Output a newline after the token if required
// This is checked before AND after switching state
$newlineAdded = false;
} elseif( isset( $goto[$state][$type] ) ) {
$state = $goto[$state][$type];
}
// Check for newline insertion again
if ( $statementsOnOwnLine && !$newlineAdded && isset( $newlineAfter[$state][$type] ) ) {
$out .= "\n";
}
return $out;
}
static function parseError($fullJavascript, $position, $errorMsg) {
// TODO: Handle the error: trigger_error, throw exception, return false...
return false;
unset( $baseArray['comment'] );
unset( $baseArray['xmp'] );
$baseArray['metadata'] = $meta->getMetadataArray();
$baseArray['metadata']['_MW_GIF_VERSION'] = GIFMetadataExtractor::VERSION;
return $baseArray;
function retrieveMetaData() {
global $wgDjvuToXML, $wgDjvuDump, $wgDjvuTxt;
wfProfileIn( __METHOD__ );
if ( isset( $wgDjvuDump ) ) {
# djvudump is faster as of version 3.5
# http://sourceforge.net/tracker/index.php?func=detail&aid=1704049&group_id=32953&atid=406583
$xml = null;
}
# Text layer
if ( isset( $wgDjvuTxt ) ) {
wfProfileIn( 'djvutxt' );
$cmd = wfEscapeShellArg( $wgDjvuTxt ) . ' --detail=page ' . wfEscapeShellArg( $this->mFilename ) ;
wfDebug( __METHOD__.": $cmd\n" );
$reg = <<<EOR
/\(page\s[\d-]*\s[\d-]*\s[\d-]*\s[\d-]*\s*"
((?> # Text to match is composed of atoms of either:
\\\\. # - any escaped character
| # - any character different from " and \
[^"\\\\]+
)*?)
$this->charCodeString( 'UserComment' );
$this->charCodeString( 'GPSProcessingMethod');
$this->charCodeString( 'GPSAreaInformation' );
//ComponentsConfiguration should really be an array instead of a string...
//This turns a string of binary numbers into an array of numbers.
$ccVals['_type'] = 'ol'; //this is for formatting later.
$this->mFilteredExifData['ComponentsConfiguration'] = $ccVals;
}
//GPSVersion(ID) is treated as the wrong type by php exif support.
//Go through each byte turning it into a version string.
//For example: "\x02\x02\x00\x00" -> "2.2.0.0"
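The byte-to-version conversion described in the comment above can be shown standalone (illustration only, not the class's actual loop):

```php
<?php
// Each raw byte of GPSVersionID becomes one dotted version component.
$bytes = "\x02\x02\x00\x00";
$version = implode( '.', array_map( 'ord', str_split( $bytes ) ) );
echo $version; // 2.2.0.0
```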
}
$charCode = substr( $this->mFilteredExifData[$prop], 0, 8);
$val = substr( $this->mFilteredExifData[$prop], 8);
switch ($charCode) {
case "\x4A\x49\x53\x00\x00\x00\x00\x00":
//JIS
wfRestoreWarnings();
}
}
//trim and check to make sure not only whitespace.
$val = trim($val);
if ( strlen( $val ) === 0 ) {
return false;
}
if( $count > 1 ) {
foreach( $val as $v ) {
if( !$this->validate( $section, $tag, $v, true ) ) {
return false;
}
}
return true;
}
class GIFHandler extends BitmapHandler {
const BROKEN_FILE = '0'; // value to store in img_metadata if error extracting metadata.
function getMetadata( $image, $filename ) {
try {
$parsedGIFMetadata = BitmapMetadataHandler::GIF( $filename );
wfSuppressWarnings();
$metadata = unserialize($image->getMetadata());
wfRestoreWarnings();
if (!$metadata || $metadata['frameCount'] <= 1) {
return $original;
}
/* Preserve original image info string, but strip the last char ')' so we can add even more */
$info = array();
$info[] = $original;
if ( $metadata['looped'] ) {
$info[] = wfMessage( 'file-info-gif-looped' )->parse();
}
if ( $metadata['frameCount'] > 1 ) {
$info[] = wfMessage( 'file-info-gif-frames' )->numParams( $metadata['frameCount'] )->parse();
}
if ( $metadata['duration'] ) {
$info[] = $wgLang->formatTimePeriod( $metadata['duration'] );
}
return $wgLang->commaList( $info );
}
}
$isLooped = false;
$xmp = "";
$comment = array();
if ( !$filename ) {
throw new Exception( "No file name specified" );
} elseif ( !file_exists( $filename ) || is_dir( $filename ) ) {
## Read GCT
self::readGCT( $fh, $bpp );
fread( $fh, 1 );
self::skipBlock( $fh );
} elseif ( $buf == self::$gif_extension_sep ) {
$buf = fread( $fh, 1 );
if ( strlen( $buf ) < 1 ) throw new Exception( "Ran out of input" );
// NETSCAPE2.0 (application name for animated gif)
if ( $data == 'NETSCAPE2.0' ) {
$data = fread( $fh, 2 ); // Block length and introduction, should be 03 01
if ($data != "\x03\x01") {
throw new Exception( "Expected \x03\x01, got $data" );
}
// Unsigned little-endian integer, loop count or zero for "forever"
$loopData = fread( $fh, 2 );
if ( strlen( $loopData ) < 2 ) throw new Exception( "Ran out of input" );
$loopData = unpack( 'v', $loopData );
$loopCount = $loopData[1];
if ($loopCount != 1) {
$isLooped = true;
}
// Read out terminator byte
fread( $fh, 1 );
} elseif ( $data == 'XMP DataXMP' ) {
// whatever...
$segments["XMP"] = substr( $temp, 29 );
wfDebug( __METHOD__ . ' Found XMP section with wrong app identifier '
. "Using anyways.\n" );
} elseif ( substr( $temp, 0, 6 ) === "Exif\0\0" ) {
// Just need to find out what the byte order is.
// because php's exif plugin sucks...
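The NETSCAPE2.0 handling earlier decodes the two-byte loop count with unpack( 'v', ... ), PHP's format code for a little-endian unsigned 16-bit integer. A self-contained sketch of that step:

```php
<?php
// The loop count arrives least-significant byte first, so "\x05\x00"
// decodes to 5; zero means "loop forever". Any value other than 1
// marks the GIF as looped, mirroring the check in the extractor above.
$loopData = unpack( 'v', "\x05\x00" );
$loopCount = $loopData[1];
$isLooped = ( $loopCount != 1 );

echo $loopCount; // 5
```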
function canAnimateThumbnail( $image ) {
return false;
}
function getMetadataType( $image ) {
return 'parsed-png';
}
function isMetadataValid( $image, $metadata ) {
if ( $metadata === self::BROKEN_FILE ) {
$info = array();
$info[] = $original;
if ( $metadata['loopCount'] == 0 ) {
$info[] = wfMessage( 'file-info-png-looped' )->parse();
} elseif ( $metadata['loopCount'] > 1 ) {
$info[] = wfMessage( 'file-info-png-repeat' )->numParams( $metadata['loopCount'] )->parse();
}
if ( $metadata['frameCount'] > 0 ) {
$info[] = wfMessage( 'file-info-png-frames' )->numParams( $metadata['frameCount'] )->parse();
}
if ( $metadata['duration'] ) {
$info[] = $wgLang->formatTimePeriod( $metadata['duration'] );
}
return $wgLang->commaList( $info );
}
case 0:
$colorType = 'greyscale';
break;
case 2:
$colorType = 'truecolour';
break;
case 3:
$this->reader->open( $source, null, LIBXML_NOERROR | LIBXML_NOWARNING );
}
// Expand entities, since Adobe Illustrator uses them for xmlns
// attributes (bug 31719). Note that libxml2 has some protection
// against large recursive entity expansions so this is not as
// insecure as it might appear to be.
$this->reader->setParserProperty( XMLReader::SUBST_ENTITIES, true );
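Setting XMLReader::SUBST_ENTITIES makes libxml2 replace entity references with their declared values as it parses. A self-contained illustration using an invented sample document (with the entity in element content, where the effect is easiest to observe):

```php
<?php
// Without SUBST_ENTITIES the reader reports &greeting; as an entity
// reference node; with it, the reference is expanded into plain text.
$xml = '<!DOCTYPE doc [ <!ENTITY greeting "hello"> ]><doc>&greeting;</doc>';

$reader = new XMLReader();
$reader->XML( $xml, null, LIBXML_NOERROR | LIBXML_NOWARNING );
// Must be set before the first read() call.
$reader->setParserProperty( XMLReader::SUBST_ENTITIES, true );

$text = '';
while ( $reader->read() ) {
	if ( $reader->nodeType === XMLReader::TEXT ) {
		$text .= $reader->value;
	}
}
echo $text; // hello
```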
* 'validate' => 'validateClosed',
* 'choices' => array( '1' => true, '2' => true ),
* ),
*/
),
'http://ns.adobe.com/exif/1.0/aux/' => array(
'Lens' => array(
//check if its in a numeric range
$inRange = false;
if ( isset( $info['rangeLow'] )
&& isset( $info['rangeHigh'] )
&& is_numeric( $val )
&& ( intval( $val ) <= $info['rangeHigh'] )
}
$m = array();
if ( preg_match(
'/(\d{1,3}),(\d{1,2}),(\d{1,2})([NWSE])/D',
$val, $m )
) {
}
$val = $coord;
return;
} elseif ( preg_match(
'/(\d{1,3}),(\d{1,2}(?:.\d*)?)([NWSE])/D',
$val, $m )
) {
return;
} else {
wfDebugLog( 'XMP', __METHOD__
. " Expected GPSCoordinate, but got $val." );
$val = null;
return;
$name = $columns[1];
$simpleUpper = $columns[12];
$simpleLower = $columns[13];
$source = codepointToUtf8( hexdec( $codepoint ) );
if( $simpleUpper ) {
$wikiUpperChars[$source] = codepointToUtf8( hexdec( $simpleUpper ) );
* @param $string String The string
* @return String String with the character codes replaced.
*/
private static function replaceForNativeNormalize( $string ) {
$string = preg_replace(
'/[\x00-\x08\x0b\x0c\x0e-\x1f]/',
UTF8_REPLACEMENT,
* with this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
* http://www.gnu.org/copyleft/gpl.html
*
* @file
* @ingroup UtfNormal
*/
/**
* Some constant definitions for the unicode normalization module.
*
* Note: these constants must all be resolvable at compile time by HipHop,
* since this file will not be executed during request startup for a compiled
* MediaWiki.
*
*
* @file
*/
UtfNormal::\$utfCombiningClass = unserialize( '$serCombining' );
UtfNormal::\$utfCanonicalComp = unserialize( '$serComp' );
UtfNormal::\$utfCanonicalDecomp = unserialize( '$serCanon' );
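The generated lines above embed precomputed normalization tables as serialize() string literals that unserialize() restores at load time. The round trip in miniature (the sample table is invented):

```php
<?php
// serialize() flattens the table into one string the generator can
// embed in PHP source; unserialize() reconstructs it exactly.
$utfCombiningClass = array( "\xCC\x80" => 230 ); // U+0300, combining class 230
$embedded = serialize( $utfCombiningClass );

$restored = unserialize( $embedded );
var_dump( $restored === $utfCombiningClass ); // bool(true)
```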
* @ingroup Cache
*/
class EhcacheBagOStuff extends BagOStuff {
var $servers, $cacheName, $connectTimeout, $timeout, $curlOptions,
$requestData, $requestDataPos;
var $curls = array();
/**
}
$this->servers = $params['servers'];
$this->cacheName = isset( $params['cache'] ) ? $params['cache'] : 'mw';
$this->connectTimeout = isset( $params['connectTimeout'] )
? $params['connectTimeout'] : 1;
$this->timeout = isset( $params['timeout'] ) ? $params['timeout'] : 1;
$this->curlOptions = array(
if ( $response['http_code'] >= 300 ) {
wfDebug( __METHOD__.": GET failure, got HTTP {$response['http_code']}\n" );
wfProfileOut( __METHOD__ );
return false;
}
$body = $response['body'];
$type = $response['content_type'];
* @return int
*/
protected function attemptPut( $key, $data, $type, $ttl ) {
// In initial benchmarking, it was 30 times faster to use CURLOPT_POST
// than CURLOPT_UPLOAD with CURLOPT_READFUNCTION. This was because
// CURLOPT_UPLOAD was pushing the request headers first, then waiting
// for an ACK packet, then sending the data, whereas CURLOPT_POST just
// sends the headers and the data in a single send().
$response = $this->doItemRequest( $key,
*/
protected function createCache( $key ) {
wfDebug( __METHOD__.": creating cache for $key\n" );
$response = $this->doCacheRequest( $key,
array(
CURLOPT_POST => 1,
CURLOPT_CUSTOMREQUEST => 'PUT',
if ( array_diff_key( $curlOptions, $this->curlOptions ) ) {
// var_dump( array_diff_key( $curlOptions, $this->curlOptions ) );
throw new MWException( __METHOD__.": to prevent options set in one doRequest() " .
- "call from affecting subsequent doRequest() calls, only options listed " .
+ "call from affecting subsequent doRequest() calls, only options listed " .
"in \$this->curlOptions may be specified in the \$curlOptions parameter." );
}
$curlOptions += $this->curlOptions;
* @return Mixed
*/
public function replace( $key, $value, $exptime = 0 ) {
return $this->client->replace( $this->encodeKey( $key ), $value,
$this->fixExpiry( $exptime ) );
}
return false;
}
if ( substr( $data, -2 ) !== "\r\n" ) {
$this->_handle_error( $sock,
'line ending missing from data block from $1' );
return false;
}
}
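The check above guards against truncated reads: memcached terminates every data block with CRLF, so a block that does not end in "\r\n" is treated as an error. A tiny sketch of the same test (the function name is invented):

```php
<?php
// A well-formed memcached data block always ends with "\r\n".
function hasDataBlockTerminator( $data ) {
	return substr( $data, -2 ) === "\r\n";
}

var_dump( hasDataBlockTerminator( "VALUE\r\n" ) ); // bool(true)
var_dump( hasDataBlockTerminator( "VALUE" ) );     // bool(false)
```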
/**
* Read the specified number of bytes from a stream. If there is an error,
* mark the socket dead.
*
* @param $sock The socket
function _fgets( $sock ) {
$result = fgets( $sock );
// fgets() may return a partial line if there is a select timeout after
// a successful recv(), so we have to check for a timeout even if we
// got a string response.
$data = stream_get_meta_data( $sock );
if ( $data['timed_out'] ) {
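The comment above is the key subtlety: fgets() can return a partial line when a select timeout fires after some bytes arrived, so stream_get_meta_data()'s timed_out flag must be checked even on a non-false result. Demonstrated on a memory stream, which never times out:

```php
<?php
// fgets() returns whatever is available up to EOF or a newline; the
// timed_out metadata flag says whether the read hit a socket timeout.
$sock = fopen( 'php://memory', 'r+' );
fwrite( $sock, "partial line with no terminator" );
rewind( $sock );

$result = fgets( $sock );
$meta = stream_get_meta_data( $sock );
var_dump( $result !== false );  // bool(true): data was returned...
var_dump( $meta['timed_out'] ); // bool(false): ...with no timeout here
fclose( $sock );
```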
public function unlock( $key ) {
return $this->client->unlock( $this->encodeKey( $key ) );
}
/**
* @param $key string
* @param $value int
*/
/**
* A cache class that replicates all writes to multiple child caches. Reads
* are implemented by reading from the caches in the order they are given in
* the configuration until a cache gives a positive result.
*
* @ingroup Cache
/**
* Factory function that creates a memcached client object.
*
* This always uses the PHP client, since the PECL client has a different
* hashing scheme and a different interpretation of the flags bitfield, so
* switching between the two clients randomly would be disastrous.
*
* @param $params array
/** Associative array mapping module name to info associative array */
protected $moduleInfos = array();
/** Associative array mapping framework ids to a list of names of test suite modules */
/** like array( 'qunit' => array( 'mediawiki.tests.qunit.suites', 'ext.foo.tests', .. ), .. ) */
protected $testModuleNames = array();
public function getModuleNames() {
return array_keys( $this->moduleInfos );
}
/**
* Get a list of test module names for one (or all) frameworks.
* If the given framework id is unknown, or if the in-object variable is not an array,
* NOTE: The mtime of the module's messages is NOT automatically included.
* If you want this to happen, you'll need to call getMsgBlobMtime()
* yourself and take its result into consideration.
*
* @param $context ResourceLoaderContext: Context object
* @return Integer: UNIX timestamp
*/
/**
* Gets group name
*
* @return String: Name of group
*/
public function getGroup() {
'MediaWiki:Print.css' => array( 'type' => 'style', 'media' => 'print' ),
);
if ( $wgHandheldStyle ) {
$pages['MediaWiki:Handheld.css'] = array(
'type' => 'style',
'media' => 'handheld' );
}
return $pages;
/**
* Gets group name
*
* @return String: Name of group
*/
public function getGroup() {
global $wgUser;
return $this->modifiedTime[$hash] = wfTimestamp( TS_UNIX, $wgUser->getTouched() );
}
/**
* @param $context ResourceLoaderContext
* @return array
$searchon = $this->db->strencode(join(',', $q));
$field = $this->getIndexField($fulltext);
// requires Net Search Extender or equivalent
//return " CONTAINS($field, '$searchon') > 0 ";
return " lcase($field) LIKE lcase('%$searchon%')";
}
* Return a partial WHERE clause to limit the search to the given namespaces
*
* @return String
* @private
*/
function queryNamespaces() {
$namespaces = implode( ',', $this->namespaces );
$match = $this->parseQuery( $filteredTerm, $fulltext );
$page = $this->db->tableName( 'page' );
$searchindex = $this->db->tableName( 'searchindex' );
return 'SELECT page_id, page_namespace, page_title, ftindex.[RANK]' .
"FROM $page,FREETEXTTABLE($searchindex , $match, LANGUAGE 'English') as ftindex " .
'WHERE page_id=ftindex.[KEY] ';
*/
function update( $id, $title, $text ) {
// We store the column data as UTF-8 byte order marked binary stream
// because we are invoking the plain text IFilter on it, and we want it
// to properly decode the stream as UTF-8. SQL doesn't support UTF8 as a data type
// but the indexer will correctly handle it by this method. Since all we are doing
// is passing this data to the indexer and never retrieving it via PHP, this will save space
* @ingroup Search
*/
class SearchOracle extends SearchEngine {
private $reservedWords = array ('ABOUT' => 1,
'ACCUM' => 1,
'AND' => 1,
'BT' => 1,
'BTG' => 1,
'BTI' => 1,
'BTP' => 1,
'FUZZY' => 1,
'HASPATH' => 1,
'INPATH' => 1,
'MINUS' => 1,
'NEAR' => 1,
'NOT' => 1,
'NT' => 1,
'NTG' => 1,
'NTI' => 1,
'NTP' => 1,
'OR' => 1,
'PT' => 1,
'RT' => 1,
'SQE' => 1,
'SYN' => 1,
'TR' => 1,
'TRSYN' => 1,
'TT' => 1,
'WITHIN' => 1);
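The array above maps each reserved word to 1 so that membership can be tested with a single isset() hash lookup rather than an in_array() scan; terms that collide with Oracle Text operators presumably need escaping before being passed to the index. A sketch of the lookup (the helper is illustrative, not the class's actual API):

```php
<?php
// Reserved words stored as keys make membership a constant-time check.
$reservedWords = array( 'ABOUT' => 1, 'AND' => 1, 'NEAR' => 1, 'WITHIN' => 1 );

function isReservedWord( array $set, $term ) {
	return isset( $set[strtoupper( $term )] );
}

var_dump( isReservedWord( $reservedWords, 'near' ) ); // bool(true)
var_dump( isReservedWord( $reservedWords, 'wiki' ) ); // bool(false)
```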
/**
), 'SearchOracle::update' );
// Sync the index
// We need to specify the DB name (i.e. user/schema) here so that
// it can work from the installer, where
// ALTER SESSION SET CURRENT_SCHEMA = ...
// was used.
$dbw->query( "CALL ctx_ddl.sync_index(" .
$dbw->addQuotes( $dbw->getDBname() . '.' . $dbw->tableName( 'si_text_idx', 'raw' ) ) . ")" );
$dbw->query( "CALL ctx_ddl.sync_index(" .
$dbw->addQuotes( $dbw->getDBname() . '.' . $dbw->tableName( 'si_title_idx', 'raw' ) ) . ")" );
}
'helptext' => $helptext,
);
}
function execute() {
if( $this->data['message'] ) {
?>
if ( isset( $this->data['extraInput'] ) && is_array( $this->data['extraInput'] ) ) {
foreach ( $this->data['extraInput'] as $inputItem ) { ?>
<tr>
<?php
if ( !empty( $inputItem['msg'] ) && $inputItem['type'] != 'checkbox' ) {
?><td class="mw-label"><label for="<?php
echo htmlspecialchars( $inputItem['name'] ); ?>"><?php
$this->msgWiki( $inputItem['msg'] ) ?></label><?php
} else {
<input type="<?php echo htmlspecialchars( $inputItem['type'] ) ?>" name="<?php
echo htmlspecialchars( $inputItem['name'] ); ?>"
tabindex="<?php echo $tabIndex++; ?>"
value="<?php
if ( $inputItem['type'] != 'checkbox' ) {
echo htmlspecialchars( $inputItem['value'] );
} else {
echo '1';
}
?>" id="<?php echo htmlspecialchars( $inputItem['name'] ); ?>"
<?php
if ( $inputItem['type'] == 'checkbox' && !empty( $inputItem['value'] ) )
echo 'checked="checked"';
?> /> <?php
if ( $inputItem['type'] == 'checkbox' && !empty( $inputItem['msg'] ) ) {
?>
<label for="<?php echo htmlspecialchars( $inputItem['name'] ); ?>"><?php
<?php } ?>
</td>
</tr>
<?php
}
}
?>
importDump.php
XML dump importer
importImages.php
Import images into the wiki
importTextFile.php
Import the contents of a text file into a wiki page
moveBatch.php
Move a batch of pages
namespaceDupes.php
Check articles name to see if they conflict with new/existing namespaces
-- Adds a user,timestamp index to the archive table
-- Used for browsing deleted contributions and renames
ALTER TABLE /*$wgDBprefix*/archive
ADD INDEX usertext_timestamp ( ar_user_text , ar_timestamp );
--
-- patch-backlinkindexes.sql
--
-- Per bug 6440 / http://bugzilla.wikimedia.org/show_bug.cgi?id=6440
--
-- Improve performance of the "what links here"-type queries
--
ALTER TABLE /*$wgDBprefix*/pagelinks
DROP INDEX pl_namespace,
ALTER TABLE /*$wgDBprefix*/templatelinks
DROP INDEX tl_namespace,
ADD INDEX tl_namespace(tl_namespace, tl_title, tl_from);
ALTER TABLE /*$wgDBprefix*/imagelinks
DROP INDEX il_to,
ADD INDEX il_to(il_to, il_from);
cat_files int signed NOT NULL default 0,
cat_hidden tinyint(1) unsigned NOT NULL default 0,
PRIMARY KEY (cat_id),
UNIQUE KEY (cat_title),
CREATE TABLE /*$wgDBprefix*/categorylinks (
-- Key to page_id of the page defined as a category member.
cl_from int unsigned NOT NULL default '0',
-- Name of the category.
-- This is also the page_title of the category's description page;
-- all such pages are in namespace 14 (NS_CATEGORY).
-- isn't always ideal, but collations seem to be an exciting
-- and dangerous new world in MySQL...
--
-- Truncate so that the cl_sortkey key fits in 1000 bytes
-- (MyISAM 5 with server_character_set=utf8)
cl_sortkey varchar(70) binary NOT NULL default '',
-- This isn't really used at present. Provided for an optional
-- sorting method by approximate addition time.
cl_timestamp timestamp NOT NULL,
UNIQUE KEY cl_from(cl_from,cl_to),
-- This key is trouble. It's incomplete, AND it's too big
-- when collation is set to UTF-8. Bleeeacch!
KEY cl_sortkey(cl_to,cl_sortkey),
-- Not really used?
KEY cl_timestamp(cl_to,cl_timestamp)
--
-- patch-categorylinksindex.sql
--
-- Per bug 10280 / http://bugzilla.wikimedia.org/show_bug.cgi?id=10280
--
-- Improve enum continuation performance of the what pages belong to a category query
--
ALTER TABLE /*$wgDBprefix*/categorylinks
DROP INDEX cl_sortkey,
el_from int(8) unsigned NOT NULL default '0',
el_to blob NOT NULL,
el_index blob NOT NULL,
KEY (el_from, el_to(40)),
KEY (el_to(60), el_from),
KEY (el_index(60))
-- Adding fa_deleted field for additional content suppression
ALTER TABLE /*$wgDBprefix*/filearchive
ADD fa_deleted tinyint unsigned NOT NULL default '0';
-- Adding index to sort by uploader
ALTER TABLE /*$wgDBprefix*/filearchive
ADD INDEX fa_user_timestamp (fa_user_text,fa_timestamp),
-- Remove useless, incomplete index
DROP INDEX fa_deleted_user;
CREATE TABLE /*$wgDBprefix*/filearchive (
-- Unique row id
fa_id int not null auto_increment,
-- Original base filename; key to image.img_name, page.page_title, etc
fa_name varchar(255) binary NOT NULL default '',
-- Filename of archived file, if an old revision
fa_archive_name varchar(255) binary default '',
-- Which storage bin (directory tree or object store) the file data
-- is stored in. Should be 'deleted' for files that have been deleted;
-- any other bin is not yet in use.
fa_storage_group varbinary(16),
-- SHA-1 of the file contents plus extension, used as a key for storage.
-- eg 8f8a562add37052a1848ff7771a2c515db94baa9.jpg
--
-- If NULL, the file was missing at deletion time or has been purged
-- from the archival storage.
fa_storage_key varbinary(64) default '',
-- Deletion information, if this file is deleted.
fa_deleted_user int,
fa_deleted_timestamp binary(14) default '',
fa_deleted_reason text,
-- Duped fields from image
fa_size int unsigned default '0',
fa_width int default '0',
fa_user int unsigned default '0',
fa_user_text varchar(255) binary default '',
fa_timestamp binary(14) default '',
PRIMARY KEY (fa_id),
INDEX (fa_name, fa_timestamp), -- pick out by image name
INDEX (fa_storage_group, fa_storage_key), -- pick out dupe files
--
-- hitcounter table is used to buffer page hits before they are periodically
-- counted and added to the cur_counter column in the cur table.
-- December 2003
--
--
-- image-user-index.sql
--
-- Add user/timestamp index to current image versions
--
ALTER TABLE /*$wgDBprefix*/image
ADD INDEX img_usertext_timestamp (img_user_text,img_timestamp);
ALTER TABLE /*$wgDBprefix*/image ADD (
-- Media type as defined by the MEDIATYPE_xxx constants
img_media_type ENUM("UNKNOWN", "BITMAP", "DRAWING", "AUDIO", "VIDEO", "MULTIMEDIA", "OFFICE", "TEXT", "EXECUTABLE", "ARCHIVE") default NULL,
-- major part of a MIME media type as defined by IANA
-- see http://www.iana.org/assignments/media-types/
img_major_mime ENUM("unknown", "application", "audio", "image", "text", "video", "message", "model", "multipart") NOT NULL default "unknown",
-- minor part of a MIME media type as defined by IANA
-- the minor parts are not required to adhere to any standard
-- but should be consistent throughout the database
--
-- patch-indexes.sql
--
-- Fix up table indexes; new to stable release in November 2003
--
ALTER TABLE IF EXISTS /*$wgDBprefix*/links
DROP INDEX l_from,
CREATE TABLE /*$wgDBprefix*/interwiki (
-- The interwiki prefix, (e.g. "Meatball", or the language prefix "de")
iw_prefix varchar(32) NOT NULL,
-- The URL of the wiki, with "$1" as a placeholder for an article name.
-- Any spaces in the name will be transformed to underscores before
-- insertion.
iw_url blob NOT NULL,
-- A boolean value indicating whether the wiki is in this project
-- (used, for example, to detect redirect loops)
iw_local BOOL NOT NULL,
UNIQUE KEY iw_prefix (iw_prefix)
) /*$wgDBTableOptions*/;
-- Add extra option fields to the ipblocks table, add some extra indexes,
-- convert infinity values in ipb_expiry to something that sorts better,
-- extend ipb_address and range fields, add a unique index for block conflict
-- detection.
-- Conflicts in the new unique index can be handled by creating a new
-- table and inserting into it instead of doing an ALTER TABLE.
ipb_expiry varbinary(14) NOT NULL default '',
ipb_range_start tinyblob NOT NULL,
ipb_range_end tinyblob NOT NULL,
PRIMARY KEY ipb_id (ipb_id),
UNIQUE INDEX ipb_address_unique (ipb_address(255), ipb_user, ipb_auto),
INDEX ipb_user (ipb_user),
) /*$wgDBTableOptions*/;
INSERT IGNORE INTO /*$wgDBprefix*/ipblocks_newunique
(ipb_id, ipb_address, ipb_user, ipb_by, ipb_reason, ipb_timestamp, ipb_auto, ipb_expiry, ipb_range_start, ipb_range_end, ipb_anon_only, ipb_create_account)
SELECT ipb_id, ipb_address, ipb_user, ipb_by, ipb_reason, ipb_timestamp, ipb_auto, ipb_expiry, ipb_range_start, ipb_range_end, 0 , ipb_user=0
FROM /*$wgDBprefix*/ipblocks;
ALTER TABLE /*$wgDBprefix*/ipblocks
ADD ipb_by_text varchar(255) binary NOT NULL default '';
UPDATE /*$wgDBprefix*/ipblocks
JOIN /*$wgDBprefix*/user ON ipb_by = user_id
SET ipb_by_text = user_name
WHERE ipb_by != 0;
-- Adding ipb_deleted field for hiding usernames
ALTER TABLE /*$wgDBprefix*/ipblocks
ADD ipb_deleted bool NOT NULL default 0;
-- Add the range handling fields
ALTER TABLE /*$wgDBprefix*/ipblocks
ADD ipb_range_start tinyblob NOT NULL default '',
ADD ipb_range_end tinyblob NOT NULL default '',
ADD INDEX ipb_range (ipb_range_start(8), ipb_range_end(8));
-- Initialise fields
-- Only range blocks match ipb_address LIKE '%/%', this fact is used in the code already
UPDATE /*$wgDBprefix*/ipblocks
SET
ipb_range_start = LPAD(HEX(
(SUBSTRING_INDEX(ipb_address, '.', 1) << 24)
+ (SUBSTRING_INDEX(SUBSTRING_INDEX(ipb_address, '.', 2), '.', -1) << 16)
+ (SUBSTRING_INDEX(SUBSTRING_INDEX(ipb_address, '.', 3), '.', -1) << 8)
+ (SUBSTRING_INDEX(SUBSTRING_INDEX(ipb_address, '/', 1), '.', -1)) ), 8, '0' ),
ipb_range_end = LPAD(HEX(
(SUBSTRING_INDEX(ipb_address, '.', 1) << 24)
+ (SUBSTRING_INDEX(SUBSTRING_INDEX(ipb_address, '.', 2), '.', -1) << 16)
+ (SUBSTRING_INDEX(SUBSTRING_INDEX(ipb_address, '.', 3), '.', -1) << 8)
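The arithmetic above packs a dotted IPv4 address into a zero-padded eight-digit hexadecimal string for range comparison. The same transformation in PHP, for comparison (the helper name is invented):

```php
<?php
// ip2long() shifts the four octets into one 32-bit value, much as the
// SUBSTRING_INDEX arithmetic does; LPAD(HEX(...), 8, '0') corresponds
// to the %08X format.
function ipToRangeHex( $ip ) {
	return sprintf( '%08X', ip2long( $ip ) );
}

echo ipToRangeHex( '1.2.3.4' ) . "\n";    // 01020304
echo ipToRangeHex( '10.0.0.255' ) . "\n"; // 0A0000FF
```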
--
-- Track inline interwiki links
--
CREATE TABLE /*_*/iwlinks (
-- page_id of the referring page
iwl_from int unsigned NOT NULL default 0,
-- Interwiki prefix code of the target
iwl_prefix varbinary(20) NOT NULL default '',
-- Jobs performed by parallel apache threads or a command-line daemon
CREATE TABLE /*_*/job (
job_id int unsigned NOT NULL PRIMARY KEY AUTO_INCREMENT,
-- Command name
-- Limited to 60 to prevent key length overflow
job_cmd varbinary(60) NOT NULL default '',
CREATE TABLE /*$wgDBprefix*/langlinks (
-- page_id of the referring page
ll_from int unsigned NOT NULL default '0',
-- Language code of the target
ll_lang varbinary(20) NOT NULL default '',
CREATE TABLE /*$wgDBprefix*/links (
-- Key to the page_id of the page containing the link.
l_from int unsigned NOT NULL default '0',
-- Key to the page_id of the link target.
-- An unfortunate consequence of this is that rename
-- operations require changing the links entries for
-- all links to the moved page.
l_to int unsigned NOT NULL default '0',
UNIQUE KEY l_from(l_from,l_to),
KEY (l_to)
CREATE TABLE /*$wgDBprefix*/brokenlinks (
-- Key to the page_id of the page containing the link.
bl_from int unsigned NOT NULL default '0',
-- Text of the target page title ("namespace:title").
-- Unfortunately this doesn't split the namespace index
-- key and therefore can't easily be joined to anything.
CREATE TABLE /*$wgDBprefix*/imagelinks (
-- Key to page_id of the page containing the image / media link.
il_from int unsigned NOT NULL default '0',
-- Filename of target image.
-- This is also the page_title of the file's description page;
-- all such pages are in namespace 6 (NS_FILE).
il_to varchar(255) binary NOT NULL default '',
UNIQUE KEY il_from(il_from,il_to),
KEY (il_to)
-- Rename the primary unique index from PRIMARY to ls_field_val
-- This is for MySQL only and is necessary only for databases which were updated
-- between MW 1.16 development revisions r50567 and r51465.
ALTER TABLE /*_*/log_search
DROP PRIMARY KEY,
ADD UNIQUE INDEX ls_field_val (ls_field,ls_value,ls_log_id);
ALTER TABLE /*$wgDBprefix*/logging
ADD log_user_text varchar(255) binary NOT NULL default '',
ADD log_page int unsigned NULL,
CHANGE log_type log_type varbinary(32) NOT NULL,
--
-- patch-logging-times-index.sql
--
-- Add a very humble index on logging times
--
ALTER TABLE /*$wgDBprefix*/logging
ADD INDEX times (log_timestamp);
-- action field, but only the type controls categorization.
log_type varbinary(10) NOT NULL default '',
log_action varbinary(10) NOT NULL default '',
-- Timestamp. Duh.
log_timestamp binary(14) NOT NULL default '19700101000000',
-- The user who performed this action; key to user_id
log_user int unsigned NOT NULL default 0,
-- Key to the page affected. Where a user is the target,
-- this will point to the user page.
log_namespace int NOT NULL default 0,
log_title varchar(255) binary NOT NULL default '',
-- Freeform text. Interpreted as edit history comments.
log_comment varchar(255) NOT NULL default '',
-- LF separated list of miscellaneous parameters
log_params blob NOT NULL,
ALTER TABLE /*_*/image
MODIFY COLUMN img_minor_mime varbinary(100) NOT NULL default "unknown";
ALTER TABLE /*_*/oldimage
MODIFY COLUMN oi_minor_mime varbinary(100) NOT NULL default "unknown";
INSERT INTO /*_*/updatelog(ul_key) VALUES ('mime_minor_length');
CREATE TABLE /*_*/msg_resource (
-- Resource name
mr_resource varbinary(255) NOT NULL,
-- Language code
mr_lang varbinary(32) NOT NULL,
-- JSON blob. This is an incomplete JSON object, i.e. without the wrapping {}
mr_blob mediumblob NOT NULL,
--
-- patch-oi_metadata.sql
--
-- Add data to allow for direct reference to old images
-- Some re-indexing here.
-- Old images can be included into pages efficiently now.
--
ALTER TABLE /*$wgDBprefix*/oldimage
DROP INDEX oi_name,
--
-- oldimage-user-index.sql
--
-- Add user/timestamp index to old image versions
--
ALTER TABLE /*$wgDBprefix*/oldimage
ADD INDEX oi_usertext_timestamp (oi_user_text,oi_timestamp);
--
-- Create the new pagelinks table to merge links and brokenlinks data,
-- and populate it.
--
-- Unlike the old links and brokenlinks, these records will not need to be
-- altered when target pages are created, deleted, or renamed. This should
-- reduce the amount of severe database frustration that happens when widely-
CREATE TABLE /*$wgDBprefix*/pagelinks (
-- Key to the page_id of the page containing the link.
pl_from int unsigned NOT NULL default '0',
-- Key to page_namespace/page_title of the target page.
-- The target page may or may not exist, and due to renames
-- and deletions may refer to different page records as time
-- goes by.
pl_namespace int NOT NULL default '0',
pl_title varchar(255) binary NOT NULL default '',
UNIQUE KEY pl_from(pl_from,pl_namespace,pl_title),
KEY (pl_namespace,pl_title)
--
-- parsercache table, for caching complete parsed articles
-- before they are embedded in the skin.
--
--
-- patch-pl-tl-il-unique-index.sql
--
-- Make reorderings of UNIQUE indices UNIQUE as well
DROP INDEX /*i*/pl_namespace ON /*_*/pagelinks;
CREATE TABLE /*$wgDBprefix*/querycache (
-- A key name, generally the base name of the special page.
qc_type varbinary(32) NOT NULL,
-- Some sort of stored value. Sizes, counts...
qc_value int unsigned NOT NULL default '0',
-- Target namespace+title
qc_namespace int NOT NULL default '0',
qc_title varchar(255) binary NOT NULL default '',
KEY (qc_type,qc_value)
) /*$wgDBTableOptions*/;
CREATE TABLE /*$wgDBprefix*/querycachetwo (
-- A key name, generally the base name of the special page.
qcc_type varbinary(32) NOT NULL,
-- Some sort of stored value. Sizes, counts...
qcc_value int unsigned NOT NULL default '0',
-- Target namespace+title
qcc_namespace int NOT NULL default '0',
qcc_title varchar(255) binary NOT NULL default '',
-- Target namespace+title2
qcc_namespacetwo int NOT NULL default '0',
qcc_titletwo varchar(255) binary NOT NULL default '',
-- Adding rc_deleted field for revisiondelete
-- Add rc_logid to match log_id
ALTER TABLE /*$wgDBprefix*/recentchanges
ADD rc_deleted tinyint unsigned NOT NULL default '0',
ADD rc_logid int unsigned NOT NULL default '0',
ADD rc_log_type varbinary(255) NULL default NULL,
-- Primary key in recentchanges
ALTER TABLE /*$wgDBprefix*/recentchanges
ADD rc_id int NOT NULL auto_increment,
ADD PRIMARY KEY rc_id (rc_id);
-- Adding the rc_ip field for logging of IP addresses in recentchanges
ALTER TABLE /*$wgDBprefix*/recentchanges
ADD rc_ip varbinary(40) NOT NULL default '',
ADD INDEX rc_ip (rc_ip);
--
-- Create the new redirect table.
-- For each redirect, this table contains exactly one row defining its target
--
CREATE TABLE /*$wgDBprefix*/redirect (
-- Key to the page_id of the redirect page
rd_from int unsigned NOT NULL default '0',
--
-- Recreates the iwl_prefix index for the iwlinks table
--
CREATE UNIQUE INDEX /*i*/iwl_prefix_title_from ON /*_*/iwlinks (iwl_prefix, iwl_title, iwl_from);
rev_minor_edit tinyint unsigned NOT NULL default '0',
rev_deleted tinyint unsigned NOT NULL default '0',
-
PRIMARY KEY rev_page_id (rev_page, rev_id),
UNIQUE INDEX rev_id (rev_id),
INDEX rev_timestamp (rev_timestamp),
-- old_id int(8) unsigned NOT NULL auto_increment,
-- old_text mediumtext NOT NULL,
-- old_flags tinyblob NOT NULL,
---
+--
-- PRIMARY KEY old_id (old_id)
-- );
CREATE TABLE /*$wgDBprefix*/searchindex (
-- Key to page_id
si_page int unsigned NOT NULL,
-
+
-- Munged version of title
si_title varchar(255) NOT NULL default '',
-
+
-- Munged version of body text
si_text mediumtext NOT NULL,
-
+
UNIQUE KEY (si_page)
) ENGINE=MyISAM;
CREATE TABLE /*$wgDBprefix*/templatelinks (
-- Key to the page_id of the page containing the link.
tl_from int unsigned NOT NULL default '0',
-
+
-- Key to page_namespace/page_title of the target page.
-- The target page may or may not exist, and due to renames
-- and deletions may refer to different page records as time
-- goes by.
tl_namespace int NOT NULL default '0',
tl_title varchar(255) binary NOT NULL default '',
-
+
UNIQUE KEY tl_from(tl_from,tl_namespace,tl_title),
KEY (tl_namespace,tl_title)
-
) /*$wgDBTableOptions*/;
create table /*$wgDBprefix*/testrun (
tr_id int not null auto_increment,
-
+
tr_date char(14) binary,
tr_mw_version blob,
tr_php_version blob,
tr_db_version blob,
tr_uname blob,
-
+
primary key (tr_id)
) engine=InnoDB;
ti_run int not null,
ti_name varchar(255),
ti_success bool,
-
+
unique key (ti_run, ti_name),
key (ti_run, ti_success),
-
+
foreign key (ti_run) references /*$wgDBprefix*/testrun(tr_id)
on delete cascade
) engine=InnoDB;
--
--- Store information about newly uploaded files before they're
+-- Store information about newly uploaded files before they're
-- moved into the actual filestore
--
CREATE TABLE /*_*/uploadstash (
us_id int unsigned NOT NULL PRIMARY KEY auto_increment,
-
+
-- the user who uploaded the file.
us_user int unsigned NOT NULL,
-- the original path
us_orig_path varchar(255) NOT NULL,
-
+
-- the temporary path at which the file is actually stored
us_path varchar(255) NOT NULL,
-
+
-- which type of upload the file came from (sometimes)
us_source_type varchar(50),
-
+
-- the date/time on which the file was added
us_timestamp varbinary(14) not null,
-
+
us_status varchar(50) not null,
-- file properties from File::getPropsFromPath. These may prove unnecessary.
us_image_width int unsigned,
us_image_height int unsigned,
us_image_bits smallint unsigned
-
) /*$wgDBTableOptions*/;
-- sometimes there's a delete for all of a user's stuff.
-- Add a 'real name' field where users can specify the name they want
-- used for author attribution or other places that real names matter.
-ALTER TABLE user
+ALTER TABLE user
ADD (user_real_name varchar(255) binary NOT NULL default '');
--- Stores the groups the user has once belonged to.
+-- Stores the groups the user has once belonged to.
-- The user may still belong to these groups. Check user_groups.
CREATE TABLE /*_*/user_former_groups (
-- Key to user_id
CREATE TABLE /*$wgDBprefix*/user_groups (
-- Key to user_id
ug_user int unsigned NOT NULL default '0',
-
+
-- Group names are short symbolic string keys.
-- The set of group names is open-ended, though in practice
-- only some predefined ones are likely to be used.
-- permissions of any group they're explicitly in, plus
-- the implicit '*' and 'user' groups.
ug_group varbinary(16) NOT NULL default '',
-
+
PRIMARY KEY (ug_user,ug_group),
KEY (ug_group)
) /*$wgDBTableOptions*/;
CREATE TABLE /*_*/user_properties(
-- Foreign key to user.user_id
up_user int not null,
-
+
-- Name of the option being saved. This is indexed for bulk lookup.
up_property varbinary(32) not null,
-
+
-- Property value as a string.
up_value blob
) /*$wgDBTableOptions*/;
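-- A sketch of the bulk lookup this table is indexed for (the property
-- key 'skin' and user id 1 are illustrative placeholders only):
--
--   SELECT up_property, up_value FROM /*_*/user_properties
--   WHERE up_user = 1;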
CREATE TABLE /*$wgDBprefix*/user_rights (
-- Key to user_id
ur_user int unsigned NOT NULL,
-
+
-- Comma-separated list of permission keys
ur_rights tinyblob NOT NULL,
-
+
UNIQUE KEY ur_user (ur_user)
) /*$wgDBTableOptions*/;
CSSJanus is a CSS parser utility designed to aid the conversion of a website's
layout from left-to-right (LTR) to right-to-left (RTL). The script was born out of
-a need to convert CSS for RTL languages when tables are not being used for layout (since tables will automatically reorder TDs in RTL).
+a need to convert CSS for RTL languages when tables are not being used for layout (since tables will automatically reorder TDs in RTL).
CSSJanus will change most of the obvious CSS property names and their values as
-well as some not-so-obvious ones (cursor, background-position %, etc...).
-The script is designed to offer flexibility to account for cases when you do
+well as some not-so-obvious ones (cursor, background-position %, etc...).
+The script is designed to offer flexibility to account for cases when you do
not want to change certain rules which exist to account for bidirectional text
display bugs, as well as situations where you may or may not want to flip annotations inside the background url string.
-Note that you can disable CSSJanus from running on an entire class or any
+Note that you can disable CSSJanus from running on an entire class or any
rule within a class by prepending a /* @noflip */ comment before the rule(s)
you want CSSJanus to ignore.
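For illustration, a minimal sketch of this behavior (the selector names are made up):
{{{
/* @noflip */ div.bidi-note { direction: ltr; }  /* left untouched */
div.sidebar { float: left; margin-left: 1em; }   /* flipped to: float: right; margin-right: 1em */
}}}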
CSSJanus itself is not always enough to make a website that works in an LTR
-language context work in an RTL language all the way, but it is a start.
+language context work in an RTL language all the way, but it is a start.
==Getting the code==
{{{
$ svn checkout http://cssjanus.googlecode.com/svn/trunk/ cssjanus
-}}}
+}}}
==Using==
--swap_left_right_in_url: Fixes "left"/"right" strings within URLs.
Ex: ./cssjanus.py --swap_left_right_in_url < file.css > file_rtl.css
--swap_ltr_rtl_in_url: Fixes "ltr"/"rtl" strings within URLs.
- Ex: ./cssjanus.py --swap_ltr_rtl_in_url < file.css > file_rtl.css
-
+ Ex: ./cssjanus.py --swap_ltr_rtl_in_url < file.css > file_rtl.css
+
If you'd like to make use of the webapp version of cssjanus, you'll need to
download the Google App Engine SDK
http://code.google.com/appengine/downloads.html
and also drop a "django" directory into this directory, with the latest svn
-from django. You should be good to go with that setup. Please let me know
+from django. You should be good to go with that setup. Please let me know
otherwise.
==Bugs, Patches==
{{{
Copyright 2008 Google Inc. All Rights Reserved.
-
+
Licensed under the Apache License, Version 2.0 (the 'License');
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
-
+
http://www.apache.org/licenses/LICENSE-2.0
-
+
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an 'AS IS' BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#!/usr/bin/hphpi -f
+#!/usr/bin/hphpi -f
<?php
define( 'MW_CONFIG_CALLBACK', 'MakeHipHop::noConfigNeeded' );
unlink( "$buildDir/source" );
}
- # With the CentOS RPMs, you just get g++44, no g++, so we have to
+ # With the CentOS RPMs, you just get g++44, no g++, so we have to
# use the environment
if ( isset( $_ENV['CXX'] ) ) {
$cxx = $_ENV['CXX'];
$cxx = 'g++';
}
- # Create a function that provides the HipHop compiler version, and
+ # Create a function that provides the HipHop compiler version, and
# doesn't exist when MediaWiki is invoked in interpreter mode.
$version = str_replace( PHP_EOL, ' ', trim( `hphp --version` ) );
file_put_contents(
$this->checkVolatileClasses( $outDir );
# Copy the generated C++ files into the source directory for cmake
- $iter = new RecursiveIteratorIterator(
+ $iter = new RecursiveIteratorIterator(
new RecursiveDirectoryIterator( $outDir ),
RecursiveIteratorIterator::SELF_FIRST );
$sourceFiles = array();
}
# Do our own version of $HPHP_HOME/bin/run.sh, which isn't so broken.
- # HipHop's RELEASE mode seems to be stuck always on, so symbols get
- # stripped. Also we will try keeping the generated .o files instead of
+ # HipHop's RELEASE mode seems to be stuck always on, so symbols get
+ # stripped. Also we will try keeping the generated .o files instead of
# throwing away hours of CPU time every time you make a typo.
chdir( $persistentDir );
if ( $regenerateMakefile ) {
- copy( $_ENV['HPHP_HOME'] . '/bin/CMakeLists.base.txt',
+ copy( $_ENV['HPHP_HOME'] . '/bin/CMakeLists.base.txt',
"$persistentDir/CMakeLists.txt" );
if ( file_exists( "$persistentDir/CMakeCache.txt" ) ) {
$cmd = 'cmake' .
" -D CMAKE_BUILD_TYPE:string=" . wfEscapeShellArg( $GLOBALS['wgHipHopBuildType'] ) .
' -D PROGRAM_NAME:string=mediawiki-hphp';
-
+
if ( file_exists( '/usr/bin/ccache' ) ) {
$cmd .= ' -D CMAKE_CXX_COMPILER:string=ccache' .
' -D CMAKE_CXX_COMPILER_ARG1:string=' . wfEscapeShellArg( $cxx );
# Determine appropriate make concurrency
	# Compilation can take a lot of memory, so let's assume memory is the limiting factor.
$procs = $this->getNumProcs();
-
+
# Run make. This is the slow step.
passthru( 'make -j' . wfEscapeShellArg( $procs ) );
$sourceBase = realpath( "$IP/.." );
}
- passthru(
+ passthru(
'cd ' . wfEscapeShellArg( $sourceBase ) . " && " .
'MW_INSTALL_PATH=' . wfEscapeShellArg( $IP ) . ' ' .
- wfEscapeShellArg(
+ wfEscapeShellArg(
"$buildDir/persistent/mediawiki-hphp",
'-c', "$thisDir/server.conf",
'-v', "Server.SourceRoot=$sourceBase",
cl_from BIGINT NOT NULL DEFAULT 0,
-- REFERENCES page(page_id) ON DELETE CASCADE,
cl_to VARCHAR(255) NOT NULL,
- -- cl_sortkey has to be at least 86 wide
+ -- cl_sortkey has to be at least 86 wide
-- in order to be compatible with the old MySQL schema from MW 1.10
--cl_sortkey VARCHAR(86),
cl_sortkey VARCHAR(230) FOR BIT DATA NOT NULL ,
CREATE TABLE user_properties (
-- Foreign key to user.user_id
up_user BIGINT NOT NULL,
-
+
-- Name of the option being saved. This is indexed for bulk lookup.
up_property VARCHAR(32) FOR BIT DATA NOT NULL,
-
+
-- Property value as a string.
up_value CLOB(64K) INLINE LENGTH 4096
);
-- This is read and executed by the install script; you should
-- not have to run it by itself unless doing a manual install.
--- Notes:
+-- Notes:
-- * DB2 will convert all table and column names to all caps internally.
-- * DB2 has a 32k limit on SQL filesize, so it may be necessary
-- to split this into two files soon.
-- REFERENCES page (page_id) ON DELETE CASCADE,
pp_propname VARCHAR(255) NOT NULL,
pp_value CLOB(64K) INLINE LENGTH 4096 NOT NULL,
- PRIMARY KEY (pp_page, pp_propname)
+ PRIMARY KEY (pp_page, pp_propname)
);
CREATE INDEX page_props_propname
ON page_props (pp_propname);
pl_namespace SMALLINT NOT NULL,
pl_title VARCHAR(255) NOT NULL
);
-CREATE UNIQUE INDEX pagelink_unique
+CREATE UNIQUE INDEX pagelink_unique
ON pagelinks (pl_from, pl_namespace, pl_title);
cl_from BIGINT NOT NULL DEFAULT 0,
-- REFERENCES page(page_id) ON DELETE CASCADE,
cl_to VARCHAR(255) NOT NULL,
- -- cl_sortkey has to be at least 86 wide
+ -- cl_sortkey has to be at least 86 wide
-- in order to be compatible with the old MySQL schema from MW 1.10
--cl_sortkey VARCHAR(86),
cl_sortkey VARCHAR(230) FOR BIT DATA NOT NULL,
);
CREATE INDEX ipb_address
ON ipblocks (ipb_address);
-CREATE INDEX ipb_user
+CREATE INDEX ipb_user
ON ipblocks (ipb_user);
CREATE INDEX ipb_range
ON ipblocks (ipb_range_start, ipb_range_end);
rc_log_type VARCHAR(255),
rc_log_action VARCHAR(255),
rc_params CLOB(64K) INLINE LENGTH 4096
-
);
CREATE INDEX rc_timestamp
ON recentchanges (rc_timestamp);
--
--- Store information about newly uploaded files before they're
+-- Store information about newly uploaded files before they're
-- moved into the actual filestore
--
CREATE TABLE uploadstash (
--- Stores the groups the user has once belonged to.
+-- Stores the groups the user has once belonged to.
-- The user may still belong to these groups. Check user_groups.
CREATE TABLE user_former_groups (
ufg_user BIGINT NOT NULL DEFAULT 0,
CREATE INDEX /*$wgDBprefix*/user_group_id ON /*$wgDBprefix*/user_newtalk([user_id]);
CREATE INDEX /*$wgDBprefix*/user_ip ON /*$wgDBprefix*/user_newtalk(user_ip);
---
+--
-- User preferences and other fun stuff
-- replaces old user.user_options BLOB
---
+--
CREATE TABLE /*$wgDBprefix*/user_properties (
up_user INT NOT NULL,
up_property NVARCHAR(32) NOT NULL,
-- The fields generally correspond to the page, revision, and text
-- fields, with several caveats.
-- Cannot reasonably create views on this table, due to the presence of TEXT
--- columns.
+-- columns.
CREATE TABLE /*$wgDBprefix*/archive (
ar_namespace SMALLINT NOT NULL DEFAULT 0,
ar_title NVARCHAR(255) NOT NULL DEFAULT '',
CREATE INDEX /*$wgDBprefix*/cl_timestamp ON /*$wgDBprefix*/categorylinks(cl_to,cl_timestamp);
--;
---
+--
-- Track all existing categories. Something is a category if 1) it has an entry
-- somewhere in categorylinks, or 2) it once did. Categories might not
-- have corresponding pages, so they need to be tracked separately.
vt_tag varchar(255) NOT NULL PRIMARY KEY
);
---
+--
-- Table for storing localisation data
---
+--
CREATE TABLE /*$wgDBprefix*/l10n_cache (
-- language code
lc_lang NVARCHAR(32) NOT NULL,
-
+
-- cache key
lc_key NVARCHAR(255) NOT NULL,
-
+
-- Value
lc_value TEXT NOT NULL DEFAULT '',
);
-- Maximum key length on SQL Server is 900 bytes
CREATE INDEX /*$wgDBprefix*/externallinks_index ON /*$wgDBprefix*/externallinks(el_index);
---
+--
-- Track external user accounts, if ExternalAuth is used
---
+--
CREATE TABLE /*$wgDBprefix*/external_user (
-- Foreign key to user_id
eu_local_id INT NOT NULL PRIMARY KEY,
);
CREATE UNIQUE INDEX /*$wgDBprefix*/langlinks_reverse_key ON /*$wgDBprefix*/langlinks(ll_lang,ll_title);
---
+--
-- Track inline interwiki links
---
+--
CREATE TABLE /*$wgDBprefix*/iwlinks (
-- page_id of the referring page
iwl_from INT NOT NULL DEFAULT 0,
-
+
-- Interwiki prefix code of the target
iwl_prefix NVARCHAR(20) NOT NULL DEFAULT '',
-
+
-- Title of the target, including namespace
iwl_title NVARCHAR(255) NOT NULL DEFAULT '',
);
PRIMARY KEY (ul_key)
);
--- NOTE To enable full text indexing on SQL 2008 you need to create an account FDH$MSSQLSERVER
+-- NOTE To enable full text indexing on SQL 2008 you need to create an account FDH$MSSQLSERVER
-- and assign a password for the FDHOST process to run under
-- Once you have assigned a password to that account, you need to run the following stored procedure
-- replacing XXXXX with the password you used.
ON &mw_prefix.testrun
BEGIN
SELECT testrun_tr_id_seq.NEXTVAL into :NEW.tr_id FROM dual;
-END;
+END;
CREATE TABLE /*$wgDBprefix*/testitem (
ti_run NUMBER NOT NULL REFERENCES &mw_prefix.testrun (tr_id) ON DELETE CASCADE,
ti_run INTEGER NOT NULL REFERENCES testrun(tr_id) ON DELETE CASCADE,
ti_name TEXT NOT NULL,
ti_success SMALLINT NOT NULL
-);
+);
CREATE UNIQUE INDEX testitem_uniq ON testitem(ti_run, ti_name);
---
+--
-- Recreates the iwl_prefix index for the iwlinks table
--
DROP INDEX IF EXISTS /*i*/iwl_prefix;
-- Patch that introduces fulltext search capabilities to SQLite schema
-- Requires that SQLite must be compiled with FTS3 module (comes with core amalgamation).
-- See http://sqlite.org/fts3.html for details of syntax.
--- Will fail if FTS3 is not present,
+-- Will fail if FTS3 is not present,
DROP TABLE IF EXISTS /*_*/searchindex;
CREATE VIRTUAL TABLE /*_*/searchindex USING FTS3(
-- Key to page_id
-- Munged version of title
si_title,
-
+
-- Munged version of body text
si_text
);
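-- A sketch of how this virtual table is queried, using FTS3 MATCH syntax
-- (the search term 'wiki' is just an illustrative placeholder):
--
--   SELECT si_page, si_title FROM /*_*/searchindex WHERE si_text MATCH 'wiki';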
-- Munged version of title
si_title TEXT,
-
+
-- Munged version of body text
si_text TEXT
);
fi
if [ -z $3 ]; then
table=blobs
-else
+else
table=$3
fi
echo "CREATE DATABASE $2" | mysql -u wikiadmin -p`wikiadmin_pass` -h $1 && \
sed "s/blobs\>/$table/" blobs.sql | mysql -u wikiadmin -p`wikiadmin_pass` -h $1 $2
-
-
$out->addModuleStyles( 'mediawiki.legacy.oldshared' );
$out->addModuleStyles( 'skins.cologneblue' );
}
-
+
/**
* Override langlink formatting behavior not to uppercase the language names.
* See otherLanguages() in CologneBlueTemplate.
$this->printTrail();
echo "\n</body></html>";
}
-
/**
* Language/charset variant links for classic-style skins
return $this->getSkin()->getLanguage()->pipeList( $s );
}
-
+
// @fixed
function otherLanguages() {
global $wgHideInterlanguageLinks;
function pageTitleLinks() {
$s = array();
$footlinks = $this->getFooterLinks();
-
+
foreach ( $footlinks['places'] as $item ) {
$s[] = $this->data[$item];
}
-
+
return $this->getSkin()->getLanguage()->pipeList( $s );
}
return $s;
}
-
+
/**
* @return string
- *
+ *
* @fixed
*/
function beforeContent() {
</a>
</p>
<p id="sitesub"><?php echo wfMessage( 'sitesubtitle' )->escaped() ?></p>
-
<div id="toplinks">
<p id="syslinks"><?php echo $this->sysLinks() ?></p>
<p id="variantlinks"><?php echo $this->variantLinks() ?></p>
<?php
$s = ob_get_contents();
ob_end_clean();
-
+
return $s;
}
/**
* @return string
- *
+ *
* @fixed
*/
function afterContent() {
// Page-related links
echo $this->bottomLinks();
echo "\n<br />";
-
+
// Footer and second searchbox
echo $this->getSkin()->getLanguage()->pipeList( array(
$this->getSkin()->mainPageLink(),
$this->searchForm( 'footer' )
) );
echo "\n<br />";
-
+
// Standard footer info
$footlinks = $this->getFooterLinks();
if ( $footlinks['info'] ) {
/**
* @return string
- *
+ *
* @fixed
*/
function sysLinks() {
return $this->getSkin()->getLanguage()->pipeList( $s );
}
-
-
-
/**
* @param $heading string
* @return string
- *
+ *
* @fixed
*/
function menuHead( $heading ) {
* @access private
*
* @return string
- *
+ *
* @fixed
*/
function quickBar(){
$s = "\n<div id='quickbar'>";
$sep = "<br />\n";
-
+
$plain_bar = $this->data['sidebar'];
$bar = array();
-
+
// Massage the sidebar
// We want to place SEARCH at the beginning and a lot of stuff before TOOLBOX (or at the end, if it's missing)
$additions_done = false;
while ( !$additions_done ) {
$bar = array(); // Empty it out
-
+
// Always display search on top
$bar['SEARCH'] = true;
-
+
foreach ( $plain_bar as $heading => $links ) {
if ( $heading == 'TOOLBOX' ) {
if( $links !== NULL ) {
					// If this is not a toolbox prosthetic we inserted ourselves, fill it out
$plain_bar['TOOLBOX'] = $this->getToolbox();
}
-
+
// And insert the stuff
-
+
// "This page" and "Edit" menus
// We need to do some massaging here... we reuse all of the items, except for $...['views']['view'],
// as $...['namespaces']['main'] and $...['namespaces']['talk'] together serve the same purpose.
);
$bar['qbedit'] = $qbedit;
$bar['qbpageoptions'] = $qbpageoptions;
-
+
// Personal tools ("My pages")
$bar['qbmyoptions'] = $this->getPersonalTools();
foreach ( array ( 'logout', 'createaccount', 'login', 'anonlogin' ) as $key ) {
$bar['qbmyoptions'][$key] = false;
}
-
+
$additions_done = true;
}
-
+
// Re-insert current heading, unless it's SEARCH
if ( $heading != 'SEARCH' ) {
$bar[$heading] = $plain_bar[$heading];
}
}
-
+
// If TOOLBOX is missing, $additions_done is still false
if ( !$additions_done ) {
$plain_bar['TOOLBOX'] = false;
}
}
-
+
foreach ( $bar as $heading => $links ) {
if ( $heading == 'SEARCH' ) {
$s .= $this->menuHead( wfMessage( 'qbfind' )->text() );
if ( $heading == 'TOOLBOX' ) {
$heading = 'toolbox';
}
-
+
$headingMsg = wfMessage( $heading );
$any_link = false;
$t = $this->menuHead( $headingMsg->exists() ? $headingMsg->text() : $heading );
-
+
foreach ( $links as $key => $link ) {
// Can be empty due to rampant sidebar massaging we're doing above
if ( $link ) {
$t .= $this->makeListItem( $key, $link, array( 'tag' => 'span' ) ) . $sep;
}
}
-
+
if ( $any_link ) {
$s .= $t;
}
/**
* @param $label string
* @return string
- *
+ *
* @fixed
*/
function searchForm( $which ) {
/**
* Adds classes to the body element.
- *
+ *
* @param $out OutputPage object
* @param &$bodyAttrs Array of attributes that will be set on the body element
*/