devel

  • added a memory exception if V8 memory gets too low

  • fixed epoch computation in hybrid logical clock

  • fixed thread affinity

  • replaced require("internal").db by require("@arangodb").db

  • added option --skip-lines for arangoimp. This allows skipping the first few lines of the import file when the CSV or TSV import is used
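
    For example, the first three lines of a CSV file could be skipped like this (file and collection names here are made up for illustration):

    > arangoimp --file data.csv --type csv --collection users --skip-lines 3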

  • fixed periodic jobs: there should be only one instance running - even if it runs longer than the period

  • improved performance of primary index and edge index lookups

  • optimizations for AQL [*] operator in case no filter, no projection and no offset/limit are used

  • added AQL function OUTERSECTION to return the symmetric difference of its input arguments
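
    A minimal sketch of its use (the order of the result elements is not guaranteed):

    RETURN OUTERSECTION([ 1, 2, 3 ], [ 2, 3, 4 ])   /* [ 1, 4 ] */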

  • Foxx manifests of installed services are now saved to disk with indentation

  • Foxx tests and scripts in development mode should now always respect updated files instead of loading stale modules

  • When disabling Foxx development mode the setup script is now re-run

  • Foxx now provides an easy way to directly serve GraphQL requests using the @arangodb/foxx/graphql module and the bundled graphql-sync dependency

  • Foxx OAuth2 module now correctly passes the access_token to the OAuth2 server

  • added timezone module

v3.0.5 (XXXX-XX-XX)

  • fixed issue #1977

  • fixed extraction of _id attribute in AQL traversal conditions

v3.0.4 (2016-08-01)

  • added missing lock for periodic jobs access

  • fixed multiple Foxx-related cluster issues

  • fix handling of empty AQL query strings

  • fixed issue in INTERSECTION AQL function with duplicate elements in the source arrays

  • fixed issue #1970

  • fixed issue #1968

  • fixed issue #1967

  • fixed issue #1962

  • fixed issue #1959

  • replaced require("internal").db by require("@arangodb").db

  • fixed issue #1954

  • fixed issue #1953

  • fixed issue #1950

  • fixed issue #1949

  • fixed segfault in V8, by backporting https://bugs.chromium.org/p/v8/issues/detail?id=5033

  • Foxx OAuth2 module now correctly passes the access_token to the OAuth2 server

v3.0.3 (2016-07-17)

  • fixed issue #1942

  • fixed issue #1941

  • fixed array index batch insertion issues for hash indexes that caused problems when no elements remained for insertion

  • fixed AQL MERGE() function with External objects originating from traversals

  • fixed some logfile recovery errors with error message "document not found"

  • fixed issue #1937

  • fixed issue #1936

  • improved performance of arangorestore in clusters with synchronous replication

v3.0.2 (2016-07-09)

  • fixed assertion failure in case multiple remove operations were used in the same query

  • fixed upsert behavior in case upsert was used in a loop with the same document example

  • fixed issue #1930

  • don't expose local file paths in Foxx error messages.

  • fixed issue #1929

  • make arangodump dump the attribute isSystem when dumping the structure of a collection, additionally make arangorestore not fail when the attribute is missing

  • fixed "Could not extract custom attribute" issue when using COLLECT with MIN/MAX functions in some contexts

  • honor presence of persistent index for sorting

  • make the AQL query optimizer not skip the "use-indexes-rule", even if enough execution plans have been created already

  • fix double precision value loss in VelocyPack JSON parser

  • added missing SSL support for arangorestore

  • improved cluster import performance

  • fix Foxx thumbnails on DC/OS

  • fix Foxx configuration not being saved

  • fix Foxx app access from within the frontend on DC/OS

  • add option --default-replication-factor to arangorestore and simplify the control over the number of shards when restoring
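
    For example, to restore a dump so that all restored collections use a replication factor of 2 (directory name made up for illustration):

    > arangorestore --input-directory dump --default-replication-factor 2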

  • fix a bug in the VPack -> V8 conversion if special attributes _key, _id, _rev, _from and _to had non-string values, which is allowed below the top level

  • fix malloc_usable_size for darwin

v3.0.1 (XXXX-XX-XX)

  • increase max. number of collections in AQL queries from 32 to 256

  • fixed issue #1916: "header 'authorization' is required" when opening services page

  • fixed issue #1915: Explain: member out of range

  • fixed issue #1914: fix unterminated buffer

  • don't remove the lockfile if it contains our own (now stale) pid;
    this fixes Docker setups (our pid will always be 1)

  • do not use revision id comparisons in compaction to determine whether a revision is obsolete, but marker memory addresses instead. This ensures revision ids don't matter when compacting documents

  • escape Unicode characters in JSON HTTP responses. This converts UTF-8 characters in HTTP responses of arangod into \uXXXX escape sequences, which makes the HTTP responses fit into the 7-bit ASCII character range and speeds up HTTP response parsing for some clients, namely node.js/V8

  • add write collections before read collections when starting a user transaction. This allows specifying the same collection in both read and write mode without unintended side effects

  • fixed buffer overrun that occurred when building very large result sets

  • index lookup optimizations for primary index and edge index

  • fixed "collection is a nullptr" issue when starting a traversal from a transaction

  • enable /_api/import on coordinator servers

v3.0.0 (2016-06-22)

  • minor GUI fixes

  • fix for replication and nonces

v3.0.0-rc3 (2016-06-19)

  • renamed various Foxx errors to no longer refer to Foxx services as apps

  • adjusted various error messages in Foxx to be more informative

  • specifying "files" in a Foxx manifest to be mounted at the service root no longer results in 404s when trying to access non-file routes

  • undeclared path parameters in Foxx no longer break the service

  • trusted reverse proxy support is now handled more consistently

  • ArangoDB request compatibility and user are now exposed in Foxx

  • all bundled NPM modules have been upgraded to their latest versions

v3.0.0-rc2 (2016-06-12)

  • added option --server.max-packet-size for client tools

  • renamed option --server.ssl-protocol to --ssl.protocol in client tools (was already done for arangod, but overlooked for client tools)

  • fix handling of --ssl.protocol value 5 (TLS v1.2) in client tools, which claimed to support it but didn't

v3.0.0-rc1 (2016-06-10)

  • forward ported V8 Comparator bugfix for inline heuristics from https://github.com/v8/v8/commit/5ff7901e24c2c6029114567de5a08ed0f1494c81

  • changed to-string conversion for AQL objects and arrays, used by the AQL function TO_STRING() and implicit to-string casts in AQL

    • arrays are now converted into their JSON-stringify equivalents, e.g.

      • [ ] is now converted to []
      • [ 1, 2, 3 ] is now converted to [1,2,3]
      • [ "test", 1, 2 ] is now converted to ["test",1,2]`

      Previous versions of ArangoDB converted arrays with no members into the empty string, and non-empty arrays into a comma-separated list of member values, without the surrounding square brackets. Additionally, string array members were not enclosed in quotes in the result string:

      • [ ] was converted to ``
      • [ 1, 2, 3 ] was converted to 1,2,3
      • [ "test", 1, 2 ] was converted to test,1,2`
    • objects are now converted to their JSON-stringify equivalents, e.g.

      • { } is converted to {}
      • { a: 1, b: 2 } is converted to {"a":1,"b":2}
      • { "test" : "foobar" } is converted to {"test":"foobar"}

      Previous versions of ArangoDB always converted objects into the string [object Object]

    This change affects also the AQL functions CONCAT() and CONCAT_SEPARATOR() which treated array values differently in previous versions. Previous versions of ArangoDB automatically flattened array values on the first level of the array, e.g. CONCAT([1, 2, 3, [ 4, 5, 6 ]]) produced 1,2,3,4,5,6. Now this will produce [1,2,3,[4,5,6]]. To flatten array members on the top level, you can now use the more explicit CONCAT(FLATTEN([1, 2, 3, [4, 5, 6]], 1)).

  • added C++ implementations for AQL functions SLICE(), CONTAINS() and RANDOM_TOKEN()

  • as a consequence of the upgrade to V8 version 5, the implementation of the JavaScript Buffer object had to be changed. JavaScript Buffer objects in ArangoDB now always store their data on the heap. There is no shared pool for small Buffer values, and no pointing into existing Buffer data when extracting slices. This change may increase the cost of creating Buffers with short contents or when peeking into existing Buffers, but was required for safer memory management and to prevent leaks.

  • the db object's function _listDatabases() was renamed to just _databases() in order to make it more consistent with the existing _collections() function. Additionally the db object's _listEndpoints() function was renamed to just _endpoints().

  • changed default value of --server.authentication from false to true in configuration files etc/relative/arangod.conf and etc/arangodb/arangod.conf.in. This means the server will be started with authentication enabled by default, requiring all client connections to provide authentication data when connecting to ArangoDB. Authentication can still be turned off via setting the value of --server.authentication to false in ArangoDB's configuration files or by specifying the option on the command-line.

  • Changed result format for querying all collections via the API GET /_api/collection.

    Previous versions of ArangoDB returned an object with an attribute named collections and an attribute named names. Both contained all available collections, but collections contained the collections as an array, and names contained the collections again, contained in an object in which the attribute names were the collection names, e.g.

    {
      "collections": [
        {"id":"5874437","name":"test","isSystem":false,"status":3,"type":2},
        {"id":"17343237","name":"something","isSystem":false,"status":3,"type":2},
        ...
      ],
      "names": {
        "test": {"id":"5874437","name":"test","isSystem":false,"status":3,"type":2},
        "something": {"id":"17343237","name":"something","isSystem":false,"status":3,"type":2},
        ...
      }
    }
    

    This result structure was redundant, and therefore has been simplified to just

    {
      "result": [
        {"id":"5874437","name":"test","isSystem":false,"status":3,"type":2},
        {"id":"17343237","name":"something","isSystem":false,"status":3,"type":2},
        ...
      ]
    }
    

    in ArangoDB 3.0.

  • added AQL functions TYPENAME() and HASH()
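
    A short sketch of how the two functions can be called; the comments only describe the general shape of the results, which are not spelled out in this changelog:

    RETURN TYPENAME([ 1, 2, 3 ])    /* type name of the value as a string, e.g. "array" */
    RETURN HASH({ a: 1, b: 2 })     /* a numeric hash value computed from the input */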

  • renamed arangob tool to arangobench

  • added AQL string comparison operator LIKE

    The operator can be used to compare strings like this:

    value LIKE search
    

    The operator is currently implemented by calling the already existing AQL function LIKE.

    This change also makes LIKE an AQL keyword. Using LIKE in either case as an attribute or collection name in AQL thus requires quoting.
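
    For example, assuming a collection named products exists, the operator could be used like this:

    FOR p IN products
      FILTER p.name LIKE "%arango%"
      RETURN p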

  • make AQL optimizer rule "remove-unnecessary-calculations" fire in more cases

    The rule will now remove calculations that are used exactly once in other expressions (e.g. LET a = doc RETURN a.value) and calculations that are just references (e.g. LET a = b).

  • renamed AQL optimizer rule "merge-traversal-filter" to "optimize-traversals". Additionally, the optimizer rule will remove unused edge and path result variables from the traversal in case they are specified in the FOR section of the traversal, but not referenced later in the query. This saves constructing edge and path results.

  • added AQL optimizer rule "inline-subqueries"

    This rule can pull out certain subqueries that are used as an operand to a FOR loop one level higher, eliminating the subquery completely. For example, the query

    FOR i IN (FOR j IN [1,2,3] RETURN j) RETURN i
    

    will be transformed by the rule to:

    FOR i IN [1,2,3] RETURN i
    

    The query

    FOR name IN (FOR doc IN _users FILTER doc.status == 1 RETURN doc.name) LIMIT 2 RETURN name
    

    will be transformed into

    FOR tmp IN _users FILTER tmp.status == 1 LIMIT 2 RETURN tmp.name
    

    The rule will only fire when the subquery is used as an operand to a FOR loop, and if the subquery does not contain a COLLECT with an INTO variable.

  • added new endpoint "srv://" for DNS service records

  • The result order of the AQL functions VALUES and ATTRIBUTES has never been guaranteed and it only had the "correct" ordering by accident when iterating over objects that were not loaded from the database. This accidental behavior is now changed by introduction of VelocyPack. No ordering is guaranteed unless you specify the sort parameter.

  • removed configure option --enable-logger

  • added AQL array comparison operators

    All AQL comparison operators now also exist in an array variant. In the array variant, the operator is preceded with one of the keywords ALL, ANY or NONE. Using one of these keywords changes the operator behavior to execute the comparison operation for all, any, or none of its left hand argument values. It is therefore expected that the left hand argument of an array operator is an array.

    Examples:

    [ 1, 2, 3 ] ALL IN [ 2, 3, 4 ]   // false
    [ 1, 2, 3 ] ALL IN [ 1, 2, 3 ]   // true
    [ 1, 2, 3 ] NONE IN [ 3 ]        // false
    [ 1, 2, 3 ] NONE IN [ 23, 42 ]   // true
    [ 1, 2, 3 ] ANY IN [ 4, 5, 6 ]   // false
    [ 1, 2, 3 ] ANY IN [ 1, 42 ]     // true
    [ 1, 2, 3 ] ANY == 2             // true
    [ 1, 2, 3 ] ANY == 4             // false
    [ 1, 2, 3 ] ANY > 0              // true
    [ 1, 2, 3 ] ANY <= 1             // true
    [ 1, 2, 3 ] NONE < 99            // false
    [ 1, 2, 3 ] NONE > 10            // true
    [ 1, 2, 3 ] ALL > 2              // false
    [ 1, 2, 3 ] ALL > 0              // true
    [ 1, 2, 3 ] ALL >= 3             // false
    ["foo", "bar"] ALL != "moo"      // true
    ["foo", "bar"] NONE == "bar"     // false
    ["foo", "bar"] ANY == "foo"      // true
    
  • improved AQL optimizer to remove unnecessary sort operations in more cases

  • allow enclosing AQL identifiers in forward ticks in addition to using backward ticks

    This allows for convenient writing of AQL queries in JavaScript template strings (which are delimited with backticks themselves), e.g.

    var q = `FOR doc IN ´collection´ RETURN doc.´name´`;
    
  • allow setting print.limitString to configure the number of characters to output before truncating

  • make logging configurable per log "topic"

    --log.level <level> sets the global log level to <level>, e.g. info, debug, trace.

    --log.level topic=<level> sets the log level for a specific topic. Currently, the following topics exist: collector, compactor, mmap, performance, queries, and requests. performance and requests are set to FATAL by default. queries is set to info. All others are set to the global level by default.

    The new log option --log.output <definition> allows directing the global or per-topic log output to different outputs. The output definition <definition> can be one of

    • "-" for stdout
    • "+" for stderr
    • "syslog://<syslog-facility>"
    • "syslog://<syslog-facility>/<application-name>"
    • "file://<relative-path-to-file>"

    The option can be specified multiple times in order to configure the output for different log topics. To set up a per-topic output configuration, use --log.output <topic>=<definition>, e.g.

    queries=file://queries.txt

    logs all queries to the file "queries.txt".

  • the option --log.requests-file is now deprecated. Instead use

    --log.level requests=info --log.output requests=file://requests.txt

  • the option --log.facility is now deprecated. Instead use

    --log.output requests=syslog://facility

  • the option --log.performance is now deprecated. Instead use

    --log.level performance=trace

  • removed option --log.source-filter

  • change collection directory names to include a random id component at the end

    The new pattern is collection-<id>-<random>, where <id> is the collection id and <random> is a random number. Previous versions of ArangoDB used a pattern collection-<id> without the random number.

    ArangoDB 3.0 understands both the old and the new directory name patterns.

  • removed mostly unused internal spin-lock implementation

  • removed support for pre-Windows 7-style locks. This removes compatibility for Windows versions older than Windows 7 (e.g. Windows Vista, Windows XP) and Windows 2008R2 (e.g. Windows 2008).

  • changed names of sub-threads started by arangod

  • added option --default-number-of-shards to arangorestore, allowing creating collections with a specifiable number of shards from a non-cluster dump

  • removed support for CoffeeScript source files

  • removed undocumented SleepAndRequeue

  • added WorkMonitor to inspect server threads

  • when downloading a Foxx service from the web interface the suggested filename is now based on the service's mount path instead of simply "app.zip"

  • the @arangodb/request response object now stores the parsed JSON response body in a property json instead of body when the request was made using the json option. The body instead contains the response body as a string.
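
    A minimal sketch of the new behavior, assuming a locally running server (the URL is made up for illustration):

    var request = require("@arangodb/request");
    var res = request.get("http://localhost:8529/_api/version", { json: true });
    /* res.json contains the parsed response object, res.body the raw response string */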

  • the Foxx API has changed significantly, 2.8 services are still supported using a backwards-compatible "legacy mode"

v2.8.11 (XXXX-XX-XX)

  • fixed issue #1937

v2.8.10 (2016-07-01)

  • make sure next local _rev value used for a document is at least as high as the _rev value supplied by external sources such as replication

  • make adding a collection in both read- and write-mode to a transaction behave as expected (write includes read). This prevents the "unregister collection used in transaction" error

  • fixed sometimes invalid result for byExample(...).count() when an index plus post-filtering was used

  • fixed "collection is a nullptr" issue when starting a traversal from a transaction

  • honor the value of the startup option --database.wait-for-sync (which controls whether new collections are created with waitForSync set to true by default) also when creating collections via the HTTP API (and thus the ArangoShell). When creating a collection via these mechanisms, the option was previously ignored, which was inconsistent.

  • fixed issue #1826: arangosh --javascript.execute: internal error (geo index issue)

  • fixed issue #1823: Arango crashed hard executing very simple query on windows

v2.8.9 (2016-05-13)

  • fixed escaping and quoting of extra parameters for executables in Mac OS X App

  • added "waiting for" status variable to web interface collection figures view

  • fixed undefined behavior in query cache invalidation

  • fixed access to /_admin/statistics API in case statistics are disabled via option --server.disable-statistics

  • Foxx manager will no longer fail hard when the Foxx store is unreachable (e.g. behind a firewall or when GitHub is unreachable), unless a service is actually being installed from the Foxx store.

v2.8.8 (2016-04-19)

  • fixed issue #1805: Query: internal error (location: arangod/Aql/AqlValue.cpp:182). Please report this error to arangodb.com (while executing)

  • allow specifying collection name prefixes for _from and _to in arangoimp:

    To avoid specifying complete document ids (consisting of collection names and document keys) for _from and _to values when importing edges with arangoimp, there are now the options --from-collection-prefix and --to-collection-prefix.

    If specified, these values will be automatically prepended to each value in _from (or _to resp.). This allows specifying only document keys inside _from and/or _to.

    Example

    > arangoimp --from-collection-prefix users --to-collection-prefix products ...
    

    Importing the following document will then create an edge between users/1234 and products/4321:

    { "_from" : "1234", "_to" : "4321", "desc" : "users/1234 is connected to products/4321" }
  • requests made with the interactive system API documentation in the web interface (Swagger) will now respect the active database instead of always using _system

v2.8.7 (2016-04-07)

  • optimized primary=>secondary failover

  • fix to-boolean conversion for documents in AQL

  • expose the User-Agent HTTP header from the ArangoShell since Github seems to require it now, and we use the ArangoShell for fetching Foxx repositories from Github

  • work with http servers that only send

  • fixed potential race condition between compactor and collector threads

  • fix removal of temporary directories on arangosh exit

  • javadoc-style comments in Foxx services are no longer interpreted as Foxx comments outside of controller/script/exports files (#1748)

  • removed remaining references to class syntax for Foxx Model and Repository from the documentation

  • added a safe-guard for corrupted master-pointer

v2.8.6 (2016-03-23)

  • arangosh can now execute JavaScript script files that contain a shebang in the first line of the file. This allows executing script files directly.

    Provided there is a script file /path/to/script.js with the shebang #!arangosh --javascript.execute:

    > cat /path/to/script.js
    #!arangosh --javascript.execute 
    print("hello from script.js");
    

    If the script file is made executable

    > chmod a+x /path/to/script.js
    

    it can be invoked on the shell directly and use arangosh for its execution:

    > /path/to/script.js
    hello from script.js
    

    This did not work in previous versions of ArangoDB, as the whole script contents (including the shebang) were treated as JavaScript code. Shebangs in script files are now ignored for all files passed to arangosh's --javascript.execute parameter.

    The alternative way of executing a JavaScript file with arangosh still works:

    > arangosh --javascript.execute /path/to/script.js
    hello from script.js
    
  • added missing reset of traversal state for nested traversals. The state of nested traversals (a traversal in an AQL query that was located in a repeatedly executed subquery or inside another FOR loop) was not reset properly, so that multiple invocations of the same nested traversal with different start vertices led to the nested traversal always using the start vertex provided on the first invocation.

  • fixed issue #1781: ArangoDB startup time increased tremendously

  • fixed issue #1783: SIGHUP should rotate the log

v2.8.5 (2016-03-XX)

  • Add OpenSSL handler for TLS V1.2 as suggested by kurtkincaid in #1771

  • fixed issue #1765 (The webinterface should display the correct query time) and #1770 (Display ACTUAL query time in aardvark's AQL editor)

  • Windows: the unhandled exception handler now calls the windows logging facilities directly without locks. This fixes lockups on crashes from the logging framework.

  • improve nullptr handling in logger.

  • added new endpoint "srv://" for DNS service records

v2.8.4 (2016-03-01)

  • global modules are no longer incorrectly resolved outside the ArangoDB JavaScript directory or the Foxx service's root directory (issue #1577)

  • improved error messages from Foxx and JavaScript (issues #1564, #1565, #1744)

v2.8.3 (2016-02-22)

  • fixed AQL filter condition collapsing for deeply-nested cases, potentially enabling usage of indexes in some dedicated cases

  • added parentheses in AQL explain command output to correctly display precedence of logical and arithmetic operators

  • Foxx Model event listeners defined on the model are now correctly invoked by the Repository methods (issue #1665)

  • Deleting a Foxx service in the frontend should now always succeed even if the files no longer exist on the file system (issue #1358)

  • Routing actions loaded from the database no longer throw exceptions when trying to load other modules using "require"

v2.8.2 (2016-02-09)

  • the continuous replication applier will now prevent the master's WAL logfiles from being removed if they are still needed by the applier on the slave. This should help slaves that suffered from the master garbage-collecting WAL logfiles that the slave would still have needed later.

    The initial synchronization will block removal of still needed WAL logfiles on the master for 10 minutes initially, and will extend this period when further requests are made to the master. Initial synchronization hands over its handle for blocking logfile removal to the continuous replication when started via the setupReplication function. In this case, continuous replication will extend the logfile removal blocking period for the required WAL logfiles when the slave makes additional requests.

    All handles that block logfile removal will time out automatically after at most 5 minutes should a master not be contacted by the slave anymore (e.g. in case the slave's replication is turned off, the slave loses the connection to the master or the slave goes down).

  • added all-in-one function setupReplication to synchronize data from master to slave and start the continuous replication:

    require("@arangodb/replication").setupReplication(configuration);
    

    The command will return when the initial synchronization is finished and the continuous replication has been started, or in case the initial synchronization has failed.

    If the initial synchronization is successful, the command will store the given configuration on the slave. It also configures the continuous replication to start automatically if the slave is restarted, i.e. autoStart is set to true.

    If the command is run while the slave's replication applier is already running, it will first stop the running applier, drop its configuration and do a resynchronization of data with the master. It will then use the provided configuration, overwriting any previously existing replication configuration on the slave.

    The following example demonstrates how to use the command for setting up replication for the _system database. Note that it should be run on the slave and not the master:

    db._useDatabase("_system");
    require("@arangodb/replication").setupReplication({
      endpoint: "tcp://master.domain.org:8529",
      username: "myuser",
      password: "mypasswd",
      verbose: false,
      includeSystem: false,
      incremental: true,
      autoResync: true
    });
    
  • the sync and syncCollection functions now always start the data synchronization as an asynchronous server job. The call to sync or syncCollection will block until synchronization is either complete or has failed with an error. The functions will automatically poll the slave periodically for status updates.

    The main benefit is that the connection to the slave does not need to stay open permanently and is thus not affected by timeout issues. Additionally the caller does not need to query the synchronization status from the slave manually as this is now performed automatically by these functions.

  • fixed undefined behavior when explaining some types of AQL traversals, fixed display of some types of traversals in AQL explain output

v2.8.1 (2016-01-29)

  • Improved AQL pattern matching by allowing a different traversal direction to be specified for one or many of the edge collections.

    FOR v, e, p IN OUTBOUND @start @@ec1, INBOUND @@ec2, @@ec3
    

    will traverse ec1 and ec3 in the OUTBOUND direction and for ec2 it will use the INBOUND direction. These directions can be combined in arbitrary ways; the direction defined after IN [steps] will be used as the default direction and can be overridden for specific collections. This feature is only available for collection lists, it is not possible to combine it with graph names.

  • detect more types of transaction deadlocks early

  • fixed display of relational operators in traversal explain output

  • fixed undefined behavior in AQL function PARSE_IDENTIFIER

  • added "engines" field to Foxx services generated in the admin interface

  • added AQL function IS_SAME_COLLECTION:

    IS_SAME_COLLECTION(collection, document): Return true if document has the same collection id as the collection specified in collection. document can either be a document handle string, or a document with an _id attribute. The function does not validate whether the collection actually contains the specified document, but only compares the name of the specified collection with the collection name part of the specified document. If document is neither an object with an _id attribute nor a string value, the function will return null and raise a warning.

    /* true */
    IS_SAME_COLLECTION('_users', '_users/my-user')
    IS_SAME_COLLECTION('_users', { _id: '_users/my-user' })
    
    /* false */
    IS_SAME_COLLECTION('_users', 'foobar/baz')
    IS_SAME_COLLECTION('_users', { _id: 'something/else' })
    

v2.8.0 (2016-01-25)

  • avoid recursive locking

v2.8.0-beta8 (2016-01-19)

  • improved internal datafile statistics for compaction and compaction triggering conditions, preventing excessive growth of collection datafiles under some workloads. This should also fix issue #1596.

  • renamed AQL optimizer rule remove-collect-into to remove-collect-variables

  • fixed primary and edge index lookups prematurely aborting searches when the specified id search value contained a different collection than the collection the index was created for

v2.8.0-beta7 (2016-01-06)

  • added vm.runInThisContext

  • added AQL keyword AGGREGATE for use in AQL COLLECT statement

    Using AGGREGATE allows more efficient aggregation (incrementally while building the groups) than previous versions of AQL, which built group aggregates afterwards from the total of all group values.

    AGGREGATE can be used inside a COLLECT statement only. If used, it must follow the declaration of grouping keys:

    FOR doc IN collection
      COLLECT gender = doc.gender AGGREGATE minAge = MIN(doc.age), maxAge = MAX(doc.age)
      RETURN { gender, minAge, maxAge }
    

    or, if no grouping keys are used, it can follow the COLLECT keyword:

    FOR doc IN collection
      COLLECT AGGREGATE minAge = MIN(doc.age), maxAge = MAX(doc.age)
      RETURN { minAge, maxAge }
    

    Only specific expressions are allowed on the right-hand side of each AGGREGATE assignment:

    • on the top level the expression must be a call to one of the supported aggregation functions LENGTH, MIN, MAX, SUM, AVERAGE, STDDEV_POPULATION, STDDEV_SAMPLE, VARIANCE_POPULATION, or VARIANCE_SAMPLE

    • the expression must not refer to variables introduced in the COLLECT itself

  • Foxx: mocha test paths with wildcard characters (asterisks) now work on Windows

  • reserved AQL keyword NONE for future use

  • web interface: fixed a graph display bug concerning dashboard view

  • web interface: fixed several bugs during the dashboard initialize process

  • web interface: included several bugfixes: #1597, #1611, #1623

  • AQL query optimizer now converts LENGTH(collection-name) to an optimized expression that returns the number of documents in a collection

  • adjusted the behavior of the expansion ([*]) operator in AQL for non-array values

    In ArangoDB 2.8, calling the expansion operator on a non-array value will always return an empty array. Previous versions of ArangoDB expanded non-array values by calling the TO_ARRAY() function for the value, which for example returned an array with a single value for boolean, numeric and string input values, and an array with the object's values for an object input value. This behavior was inconsistent with how the expansion operator works for the array indexes in 2.8, so the behavior is now unified:

    • if the left-hand side operand of [*] is an array, the array will be returned as is when calling [*] on it
    • if the left-hand side operand of [*] is not an array, an empty array will be returned by [*]

    AQL queries that rely on the old behavior can be changed by either calling TO_ARRAY explicitly or by using the [*] at the correct position.

    The following example query will change its result in 2.8 compared to 2.7:

    LET values = "foo" RETURN values[*]
    

    In 2.7 the query has returned the array [ "foo" ], but in 2.8 it will return an empty array [ ]. To make it return the array [ "foo" ] again, an explicit TO_ARRAY function call is needed in 2.8 (which in this case allows the removal of the [*] operator altogether). This also works in 2.7:

    LET values = "foo" RETURN TO_ARRAY(values)
    

    Another example:

    LET values = [ { name: "foo" }, { name: "bar" } ]
    RETURN values[*].name[*]
    

    The above returned [ [ "foo" ], [ "bar" ] ] in 2.7. In 2.8 it will return [ [ ], [ ] ], because the value of name is not an array. To change the results to the 2.7 style, the query can be changed to

    LET values = [ { name: "foo" }, { name: "bar" } ]
    RETURN values[* RETURN TO_ARRAY(CURRENT.name)]
    

    The above also works in 2.7. The following types of queries won't change:

    LET values = [ 1, 2, 3 ] RETURN values[*]
    LET values = [ { name: "foo" }, { name: "bar" } ] RETURN values[*].name
    LET values = [ { names: [ "foo", "bar" ] }, { names: [ "baz" ] } ] RETURN values[*].names[*]
    LET values = [ { names: [ "foo", "bar" ] }, { names: [ "baz" ] } ] RETURN values[*].names[**]
    
  • slightly adjusted V8 garbage collection strategy so that collection eventually happens in all contexts that hold V8 external references to documents and collections.

    also adjusted default value of --javascript.gc-frequency from 10 seconds to 15 seconds, as less internal operations are carried out in JavaScript.

  • fixes for AQL optimizer and traversal

  • added --create-collection-type option to arangoimp

    This allows specifying the type of the collection to be created when --create-collection is set to true.
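
    For example, to create an edge collection when importing (file and collection names are made up for illustration):

    > arangoimp --file edges.json --collection myEdges --create-collection true --create-collection-type edge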

v2.8.0-beta2 (2015-12-16)

  • added AQL query optimizer rule "sort-in-values"

    This rule pre-sorts the right-hand side operand of the IN and NOT IN operators so the operation can use a binary search with logarithmic complexity instead of a linear search. The rule is applied when the right-hand side operand of an IN or NOT IN operator in a filter condition is a variable that is defined in a different loop/scope than the operator itself. Additionally, the filter condition must consist of solely the IN or NOT IN operation in order to avoid any side-effects.
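
    A sketch of a query shape the rule is intended for: values is defined in an outer scope and the filter consists solely of the IN operation (collection names are made up for illustration):

    LET values = (FOR u IN users RETURN u.id)
    FOR doc IN orders
      FILTER doc.userId IN values
      RETURN doc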

  • changed the collection status terminology in the web interface for collections for which an unload request has been issued, from "in the process of being unloaded" to "will be unloaded".

  • unloading a collection via the web interface will now trigger garbage collection in all V8 contexts and force a WAL flush. This increases the chances of performing the unload faster.

  • added the following attributes to the result of collection.figures() and the corresponding HTTP API at PUT /_api/collection/<name>/figures:

    • documentReferences: The number of references to documents in datafiles that JavaScript code currently holds. This information can be used for debugging compaction and unload issues.
    • waitingFor: An optional string value that contains information about which object type is at the head of the collection's cleanup queue. This information can be used for debugging compaction and unload issues.
    • compactionStatus.time: The point in time the compaction for the collection was last executed. This information can be used for debugging compaction issues.
    • compactionStatus.message: The action that was performed when the compaction was last run for the collection. This information can be used for debugging compaction issues.

    Note: waitingFor and compactionStatus may be empty when called on a coordinator in a cluster.

  • the compaction will now provide queryable status info that can be used to track its progress. The compaction status is displayed in the web interface, too.

  • better error reporting for arangodump and arangorestore

  • arangodump will now fail by default when trying to dump edges that refer to already dropped collections. This can be circumvented by specifying the option --force true when invoking arangodump

  • fixed cluster upgrade procedure

  • the AQL functions NEAR and WITHIN now have stricter validations for their input parameters limit, radius and distance. They may now throw exceptions when invalid parameters are passed that may have not led to exceptions in previous versions.

  • deprecation warnings now log stack traces

  • Foxx: improved backwards compatibility with 2.5 and 2.6

    • reverted Model and Repository back to non-ES6 "classes" because of compatibility issues when using the extend method with a constructor

    • removed deprecation warnings for extend and controller.del

    • restored deprecated method Model.toJSONSchema

    • restored deprecated type, jwt and sessionStorageApp options in Controller#activateSessions

v2.8.0-beta1 (2015-12-06)

  • added AQL function IS_DATESTRING(value)

    Returns true if value is a string that can be used in a date function. This includes partial dates such as 2015 or 2015-10 and strings containing invalid dates such as 2015-02-31. The function will return false for all non-string values, even if some of them may be usable in date functions.
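
    Examples following the description above:

    RETURN IS_DATESTRING("2015-10")       /* true, partial date */
    RETURN IS_DATESTRING("2015-02-31")    /* true, invalid dates are not detected */
    RETURN IS_DATESTRING(1453815600000)   /* false, not a string */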

v2.8.0-alpha1 (2015-12-03)

  • added AQL keywords GRAPH, OUTBOUND, INBOUND and ANY for use in graph traversals, reserved AQL keyword ALL for future use

    Usage of these keywords as collection names, variable names or attribute names in AQL queries will not be possible without quoting. For example, the following AQL query will still work as it uses a quoted collection name and a quoted attribute name:

    FOR doc IN `OUTBOUND`
      RETURN doc.`any`
    
  • issue #1593: added AQL POW function for exponentation

  • added cluster execution site info in explain output for AQL queries

  • replication improvements:

    • added autoResync configuration parameter for continuous replication.

      When set to true, a replication slave will automatically trigger a full data re-synchronization with the master when the master cannot provide the log data the slave had asked for. Note that autoResync will only work when the option requireFromPresent is also set to true for the continuous replication, or when the continuous syncer is started and detects that no start tick is present.

      Automatic re-synchronization may transfer a lot of data from the master to the slave and may be expensive. It is therefore turned off by default. When turned off, the slave will never perform an automatic re-synchronization with the master.

    • added idleMinWaitTime and idleMaxWaitTime configuration parameters for continuous replication.

      These parameters can be used to control the minimum and maximum wait time the slave will (intentionally) idle and not poll for master log changes in case the master had sent the full logs already. The idleMaxWaitTime value will only be used when adaptivePolling is set to true. When adaptivePolling is disabled, only idleMinWaitTime will be used as a constant time span in which the slave will not poll the master for further changes. The default values are 0.5 seconds for idleMinWaitTime and 2.5 seconds for idleMaxWaitTime, which correspond to the hard-coded values used in previous versions of ArangoDB.

    • added initialSyncMaxWaitTime configuration parameter for initial and continuous replication

      This option controls the maximum wait time (in seconds) that the initial synchronization will wait for a response from the master when fetching initial collection data. If no response is received within this time period, the initial synchronization will give up and fail. This option is also relevant for continuous replication in case autoResync is set to true, as then the continuous replication may trigger a full data re-synchronization in case the master cannot provide the log data the slave had asked for.

    • HTTP requests sent from the slave to the master during initial synchronization will now be retried if they fail with connection problems.

    • the initial synchronization now logs its progress so it can be queried using the regular replication status check APIs.

    • added async attribute for sync and syncCollection operations called from the ArangoShell. Setting this attribute to true will make the synchronization job on the server go into the background, so that the shell does not block. The status of the started asynchronous synchronization job can be queried from the ArangoShell like this:

      /* starts initial synchronization */
      var replication = require("@arangodb/replication");
      var id = replication.sync({
        endpoint: "tcp://master.domain.org:8529",
        username: "myuser",
        password: "mypasswd",
        async: true
      });

      /* now query the id of the returned async job and print the status */
      print(replication.getSyncResult(id));

      The result of getSyncResult() will be false while the server-side job has not completed, and different to false if it has completed. When it has completed, all job result details will be returned by the call to getSyncResult().

  • fixed non-deterministic query results in some cluster queries

  • fixed issue #1589

  • return HTTP status code 410 (gone) instead of HTTP 408 (request timeout) for server-side operations that are canceled / killed. Sending 410 instead of 408 prevents clients from re-starting the same (canceled) operation. Google Chrome for example sends the HTTP request again in case it is responded with an HTTP 408, and this is exactly the opposite of the desired behavior when an operation is canceled / killed by the user.

  • web interface: queries in AQL editor now cancelable

  • web interface: dashboard - added replication information

  • web interface: AQL editor now supports bind parameters

  • added startup option --server.hide-product-header to make the server not send the HTTP response header "Server: ArangoDB" in its HTTP responses. By default, the option is turned off so the header is still sent as usual.

  • added new AQL function UNSET_RECURSIVE to recursively unset attributes from objects/documents
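
    A minimal sketch (attribute names made up for illustration):

    RETURN UNSET_RECURSIVE({ a: 1, sub: { a: 2, b: 3 } }, "a")   /* { "sub": { "b": 3 } } */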

  • switched command-line editor in ArangoShell and arangod to linenoise-ng

  • added automatic deadlock detection for transactions

    In case a deadlock is detected, a multi-collection operation may be rolled back automatically and fail with error 29 (deadlock detected). Client code for operations containing more than one collection should be aware of this potential error and handle it accordingly, either by giving up or retrying the transaction.
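
    A sketch of how client code could react to the new error from the ArangoShell (collection names and the retry policy are made up for illustration):

    var db = require("@arangodb").db;
    try {
      db._executeTransaction({
        collections: { write: [ "orders", "customers" ] },
        action: function () { /* modify documents in both collections */ }
      });
    } catch (err) {
      if (err.errorNum === 29) {
        /* deadlock detected: give up here, or retry the transaction */
      } else {
        throw err;
      }
    }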

  • Added C++ implementations for the AQL arithmetic operations and the following AQL functions:

    • ABS
    • APPEND
    • COLLECTIONS
    • CURRENT_DATABASE
    • DOCUMENT
    • EDGES
    • FIRST
    • FIRST_DOCUMENT
    • FIRST_LIST
    • FLATTEN
    • FLOOR
    • FULLTEXT
    • LAST
    • MEDIAN
    • MERGE_RECURSIVE
    • MINUS
    • NEAR
    • NOT_NULL
    • NTH
    • PARSE_IDENTIFIER
    • PERCENTILE
    • POP
    • POSITION
    • PUSH
    • RAND
    • RANGE
    • REMOVE_NTH
    • REMOVE_VALUE
    • REMOVE_VALUES
    • ROUND
    • SHIFT
    • SQRT
    • STDDEV_POPULATION
    • STDDEV_SAMPLE
    • UNSHIFT
    • VARIANCE_POPULATION
    • VARIANCE_SAMPLE
    • WITHIN
    • ZIP
  • improved performance of skipping over many documents in an AQL query when no indexes and no filters are used, e.g.

    FOR doc IN collection
      LIMIT 1000000, 10
      RETURN doc
    
  • Added array indexes

    Hash indexes and skiplist indexes can now optionally be defined for array values so they index individual array members.

    To define an index for array values, the attribute name is extended with the expansion operator [*] in the index definition:

    arangosh> db.colName.ensureHashIndex("tags[*]");
    

    When given the following document

    { tags: [ "AQL", "ArangoDB", "Index" ] }
    

    the index will now contain the individual values "AQL", "ArangoDB" and "Index".

    Now the index can be used for finding all documents having "ArangoDB" somewhere in their tags array using the following AQL query:

    FOR doc IN colName
      FILTER "ArangoDB" IN doc.tags[*]
      RETURN doc
    
  • rewrote AQL query optimizer rule use-index-range and renamed it to use-indexes. The name change affects rule names in the optimizer's output.

  • rewrote AQL execution node IndexRangeNode and renamed it to IndexNode. The name change affects node names in the optimizer's explain output.

  • added convenience function db._explain(query) for human-readable explanation of AQL queries
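
    For example, from the ArangoShell (collection name made up for illustration):

    db._explain("FOR doc IN users FILTER doc.age >= 21 RETURN doc");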

  • module resolution as used by require now behaves more like in node.js

  • the org/arangodb/request module now returns response bodies for error responses by default. The old behavior of not returning bodies for error responses can be re-enabled by explicitly setting the option returnBodyOnError to false (#1437)

v2.7.6 (2016-01-30)

  • detect more types of transaction deadlocks early

v2.7.5 (2016-01-22)

  • backported added automatic deadlock detection for transactions

    In case a deadlock is detected, a multi-collection operation may be rolled back automatically and fail with error 29 (deadlock detected). Client code for operations containing more than one collection should be aware of this potential error and handle it accordingly, either by giving up or retrying the transaction.

  • improved internal datafile statistics for compaction and compaction triggering conditions, preventing excessive growth of collection datafiles under some workloads. This should also fix issue #1596.

  • Foxx export cache should no longer break if a broken app is loaded in the web admin interface.

  • Foxx: removed some incorrect deprecation warnings.

  • Foxx: mocha test paths with wildcard characters (asterisks) now work on Windows

v2.7.4 (2015-12-21)

  • slightly adjusted V8 garbage collection strategy so that collection eventually happens in all contexts that hold V8 external references to documents and collections.

  • added the following attributes to the result of collection.figures() and the corresponding HTTP API at PUT /_api/collection/<name>/figures:

    • documentReferences: The number of references to documents in datafiles that JavaScript code currently holds. This information can be used for debugging compaction and unload issues.
    • waitingFor: An optional string value that contains information about which object type is at the head of the collection's cleanup queue. This information can be used for debugging compaction and unload issues.
    • compactionStatus.time: The point in time the compaction for the collection was last executed. This information can be used for debugging compaction issues.
    • compactionStatus.message: The action that was performed when the compaction was last run for the collection. This information can be used for debugging compaction issues.

    Note: waitingFor and compactionStatus may be empty when called on a coordinator in a cluster.

  • the compaction will now provide queryable status info that can be used to track its progress. The compaction status is displayed in the web interface, too.

v2.7.3 (2015-12-17)

  • fixed some replication value conversion issues when replication applier properties were set via ArangoShell

  • fixed disappearing of documents for collections transferred via sync or syncCollection if the collection was dropped right before synchronization and drop and (re-)create collection markers were located in the same WAL file

  • fixed an issue where overwriting the system sessions collection would break the web interface when authentication is enabled

v2.7.2 (2015-12-01)

  • replication improvements:

    • added autoResync configuration parameter for continuous replication.

      When set to true, a replication slave will automatically trigger a full data re-synchronization with the master when the master cannot provide the log data the slave had asked for. Note that autoResync will only work when the option requireFromPresent is also set to true for the continuous replication, or when the continuous syncer is started and detects that no start tick is present.

      Automatic re-synchronization may transfer a lot of data from the master to the slave and may be expensive. It is therefore turned off by default. When turned off, the slave will never perform an automatic re-synchronization with the master.

    • added idleMinWaitTime and idleMaxWaitTime configuration parameters for continuous replication.

      These parameters can be used to control the minimum and maximum wait time the slave will (intentionally) idle and not poll for master log changes in case the master had sent the full logs already. The idleMaxWaitTime value will only be used when adaptivePolling is set to true. When adaptivePolling is disabled, only idleMinWaitTime will be used as a constant time span in which the slave will not poll the master for further changes. The default values are 0.5 seconds for idleMinWaitTime and 2.5 seconds for idleMaxWaitTime, which correspond to the hard-coded values used in previous versions of ArangoDB.

    • added initialSyncMaxWaitTime configuration parameter for initial and continuous replication

      This option controls the maximum wait time (in seconds) that the initial synchronization will wait for a response from the master when fetching initial collection data. If no response is received within this time period, the initial synchronization will give up and fail. This option is also relevant for continuous replication in case autoResync is set to true, as then the continuous replication may trigger a full data re-synchronization in case the master cannot provide the log data the slave had asked for.

    • HTTP requests sent from the slave to the master during initial synchronization will now be retried if they fail with connection problems.

    • the initial synchronization now logs its progress so it can be queried using the regular replication status check APIs.

  • fixed non-deterministic query results in some cluster queries

  • added missing lock instruction for primary index in compactor size calculation

  • fixed issue #1589

  • fixed issue #1583

  • fixed undefined behavior when accessing the top level of a document with the [*] operator

  • fixed potentially invalid pointer access in shaper when the currently accessed document got re-located by the WAL collector at the very same time

  • Foxx: optional configuration options no longer log validation errors when assigned empty values (#1495)

  • Foxx: constructors provided to Repository and Model sub-classes via extend are now correctly called (#1592)

v2.7.1 (2015-11-07)

  • switch to linenoise next generation

  • exclude _apps collection from replication

    The slave has its own _apps collection which it populates on server start. When replicating data from the master to the slave, the data from the master may clash with the slave's own data in the _apps collection. Excluding the _apps collection from replication avoids this.

  • disable replication appliers when starting in modes --upgrade, --no-server and --check-upgrade

  • more detailed output in arango-dfdb

  • fixed "no start tick" issue in replication applier

    This error could occur after restarting a slave server after a shutdown when no data was ever transferred from the master to the slave via the continuous replication

  • fixed problem during SSL client connection abort that led to scheduler thread staying at 100% CPU saturation

  • fixed potential segfault in AQL NEIGHBORS function implementation when C++ function variant was used and collection names were passed as strings

  • removed duplicate target for some frontend JavaScript files from the Makefile

  • make AQL function MERGE() work on a single array parameter, too. This allows combining the attributes of multiple objects from an array into a single object, e.g.

    RETURN MERGE([
      { foo: 'bar' },
      { quux: 'quetzalcoatl', ruled: true },
      { bar: 'baz', foo: 'done' }
    ])
    

    will now return:

    {
      "foo": "done",
      "quux": "quetzalcoatl",
      "ruled": true,
      "bar": "baz"
    }
    
  • fixed potential deadlock in collection status changing on Windows

  • fixed hard-coded incremental parameter in shell implementation of syncCollection function in replication module

  • fix for GCC5: added check for '-stdlib' option

v2.7.0 (2015-10-09)

  • fixed request statistics aggregation. When arangod was started in supervisor mode, the request statistics always showed 0 requests, as the statistics aggregation thread did not run then.

  • read server configuration files before dropping privileges. this ensures that the SSL keyfile specified in the configuration can be read with the server's start privileges (i.e. root when using a standard ArangoDB package).

  • fixed replication with a 2.6 replication configuration and issues with a 2.6 master

  • raised default value of --server.descriptors-minimum to 1024

  • allow Foxx apps to be installed underneath URL path /_open/, so they can be (intentionally) accessed without authentication.

  • added allowImplicit sub-attribute in collections declaration of transactions. The allowImplicit attributes allows making transactions fail should they read-access a collection that was not explicitly declared in the collections array of the transaction.
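
    A sketch of a transaction declaration using the new sub-attribute (collection name made up for illustration):

    db._executeTransaction({
      collections: {
        read: [ "users" ],
        allowImplicit: false   /* read access to undeclared collections will now fail */
      },
      action: function () {
        var db = require("org/arangodb").db;
        return db.users.toArray();
      }
    });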

  • added "special" password ARANGODB_DEFAULT_ROOT_PASSWORD. If you pass ARANGODB_DEFAULT_ROOT_PASSWORD as password, it will read the password from the environment variable ARANGODB_DEFAULT_ROOT_PASSWORD

v2.7.0-rc2 (2015-09-22)

  • fix over-eager datafile compaction

    This should reduce the need to compact directly after loading a collection when a collection datafile contained many insertions and updates for the same documents. It should also prevent re-compacting already merged datafiles in case not many changes were made. Compaction will also make fewer index lookups than before.

  • added syncCollection() function in module org/arangodb/replication

    This allows synchronizing the data of a single collection from a master to a slave server. Synchronization can either restore the whole collection by transferring all documents from the master to the slave, or incrementally by only transferring documents that differ. This is done by partitioning the collection's entire key space into smaller chunks and comparing the data chunk-wise between master and slave. Only chunks that are different will be re-transferred.

    The syncCollection() function can be used as follows:

    require("org/arangodb/replication").syncCollection(collectionName, options);
    

    e.g.

    require("org/arangodb/replication").syncCollection("myCollection", {
      endpoint: "tcp://127.0.0.1:8529",  /* master */
      username: "root",                  /* username for master */
      password: "secret",                /* password for master */
      incremental: true                  /* use incremental mode */
    });
    
  • additionally allow the following characters in document keys:

    ( ) + , = ; $ ! * ' %

v2.7.0-rc1 (2015-09-17)

  • removed undocumented server-side-only collection functions:

    • collection.OFFSET()
    • collection.NTH()
    • collection.NTH2()
    • collection.NTH3()
  • upgraded Swagger to version 2.0 for the Documentation

    This gives users better prepared test request structures. More conversions will follow so that client libraries can eventually be auto-generated.

  • added extra AQL functions for date and time calculation and manipulation. These functions were contributed by GitHub users @CoDEmanX and @friday. A big thanks for their work!

    The following extra date functions are available from 2.7 on (a few usage sketches follow the list below):

    • DATE_DAYOFYEAR(date): Returns the day of year number of date. The return values range from 1 to 365, or 366 in a leap year respectively.

    • DATE_ISOWEEK(date): Returns the ISO week date of date. The return values range from 1 to 53. Monday is considered the first day of the week. There are no fractional weeks, thus the last days in December may belong to the first week of the next year, and the first days in January may be part of the previous year's last week.

    • DATE_LEAPYEAR(date): Returns whether the year of date is a leap year.

    • DATE_QUARTER(date): Returns the quarter of the given date (1-based):

      • 1: January, February, March
      • 2: April, May, June
      • 3: July, August, September
      • 4: October, November, December
    • DATE_DAYS_IN_MONTH(date): Returns the number of days in date's month (28..31).
    • DATE_ADD(date, amount, unit): Adds amount given in unit to date and returns the calculated date.

      unit can be one of the following to specify the time unit to add or subtract (case-insensitive):

      • y, year, years
      • m, month, months
      • w, week, weeks
      • d, day, days
      • h, hour, hours
      • i, minute, minutes
      • s, second, seconds
      • f, millisecond, milliseconds

      amount is the number of units to add (positive value) or subtract (negative value).

    • DATE_SUBTRACT(date, amount, unit): Subtracts amount given in unit from date and returns the calculated date.

      It works the same as DATE_ADD(), except that it subtracts. It is equivalent to calling DATE_ADD() with a negative amount, except that DATE_SUBTRACT() can also subtract ISO durations. Note that negative ISO durations are not supported (i.e. starting with -P, like -P1Y).

    • DATE_DIFF(date1, date2, unit, asFloat): Calculate the difference between two dates in given time unit, optionally with decimal places. Returns a negative value if date1 is greater than date2.

    • DATE_COMPARE(date1, date2, unitRangeStart, unitRangeEnd): Compare two partial dates and return true if they match, false otherwise. The parts to compare are defined by a range of time units.

      The full range is: years, months, days, hours, minutes, seconds, milliseconds. Pass the unit to start from as unitRangeStart, and the unit to end with as unitRangeEnd. All units in between will be compared. Leave out unitRangeEnd to only compare unitRangeStart.

    • DATE_FORMAT(date, format): Format a date according to the given format string. It supports the following placeholders (case-insensitive):

      • %t: timestamp, in milliseconds since midnight 1970-01-01
      • %z: ISO date (0000-00-00T00:00:00.000Z)
      • %w: day of week (0..6)
      • %y: year (0..9999)
      • %yy: year (00..99), abbreviated (last two digits)
      • %yyyy: year (0000..9999), padded to length of 4
      • %yyyyyy: year (-009999 .. +009999), with sign prefix and padded to length of 6
      • %m: month (1..12)
      • %mm: month (01..12), padded to length of 2
      • %d: day (1..31)
      • %dd: day (01..31), padded to length of 2
      • %h: hour (0..23)
      • %hh: hour (00..23), padded to length of 2
      • %i: minute (0..59)
      • %ii: minute (00..59), padded to length of 2
      • %s: second (0..59)
      • %ss: second (00..59), padded to length of 2
      • %f: millisecond (0..999)
      • %fff: millisecond (000..999), padded to length of 3
      • %x: day of year (1..366)
      • %xxx: day of year (001..366), padded to length of 3
      • %k: ISO week date (1..53)
      • %kk: ISO week date (01..53), padded to length of 2
      • %l: leap year (0 or 1)
      • %q: quarter (1..4)
      • %a: days in month (28..31)
      • %mmm: abbreviated English name of month (Jan..Dec)
      • %mmmm: English name of month (January..December)
      • %www: abbreviated English name of weekday (Sun..Sat)
      • %wwww: English name of weekday (Sunday..Saturday)
      • %&: special escape sequence for rare occasions
      • %%: literal %
      • %: ignored
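
    A brief arangosh sketch of how some of these functions can be combined (the input dates are arbitrary):

    db._query("RETURN DATE_FORMAT(DATE_ADD('2016-02-28', 1, 'day'), '%yyyy-%mm-%dd')").toArray();
    // [ "2016-02-29" ] -- 2016 is a leap year
    db._query("RETURN DATE_DIFF('2016-01-01', '2016-03-01', 'd')").toArray();
    // [ 60 ]
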
  • new WAL logfiles and datafiles are now created non-sparse

    This prevents SIGBUS signals being raised when memory of a sparse datafile is accessed and the disk is full and the accessed file part is not actually disk-backed. In this case the mapped memory region is not necessarily backed by physical memory, and accessing the memory may raise SIGBUS and crash arangod.

  • the internal.download() function and the module org/arangodb/request used some internal library function that handled the sending of HTTP requests from inside of ArangoDB. This library unconditionally set an HTTP header Accept-Encoding: gzip in all outgoing HTTP requests.

    This has been fixed in 2.7, so Accept-Encoding: gzip is not set automatically anymore. Additionally, the header User-Agent: ArangoDB is not set automatically either. If client applications desire to send these headers, they are free to add them when constructing the requests using the download function or the request module.

  • fixed issue #1436: org/arangodb/request advertises deflate without supporting it

  • added template string generator function aqlQuery for generating AQL queries

    This can be used to generate safe AQL queries with JavaScript parameter variables or expressions easily:

    var name = 'test';
    var attributeName = '_key';
    var query = aqlQuery`FOR u IN users FILTER u.name == ${name} RETURN u.${attributeName}`;
    db._query(query);
    
  • report memory usage for document header data (revision id, pointer to data etc.) in db.collection.figures(). The memory used for document headers will now show up in the already existing attribute indexes.size. Due to that, the index sizes reported by figures() in 2.7 will be higher than those reported by 2.6, but the 2.7 values are more accurate.

  • IMPORTANT CHANGE: the filenames in dumps created by arangodump now contain not only the name of the dumped collection, but also an additional 32-digit hash value. This is done to prevent overwriting dump files in case-insensitive file systems when there exist multiple collections with the same name (but with different cases).

    For example, if a database has two collections: test and Test, previous versions of ArangoDB created the files

    • test.structure.json and test.data.json for collection test
    • Test.structure.json and Test.data.json for collection Test

    This did not work for case-insensitive filesystems, because the files for the second collection would have overwritten the files of the first. arangodump in 2.7 will create the following filenames instead:

    • test_098f6bcd4621d373cade4e832627b4f6.structure.json and test_098f6bcd4621d373cade4e832627b4f6.data.json
    • Test_0cbc6611f5540bd0809a388dc95a615b.structure.json and Test_0cbc6611f5540bd0809a388dc95a615b.data.json

    These filenames will be unambiguous even in case-insensitive filesystems.

  • IMPORTANT CHANGE: make arangod actually close lingering client connections when idle for at least the duration specified via --server.keep-alive-timeout. In previous versions of ArangoDB, connections were not closed by the server when the timeout was reached and the client was still connected. Now the connection is properly closed by the server in case of timeout. Client applications relying on the old behavior may now need to reconnect to the server when their idle connections time out and get closed (note: connections being idle for a long time may be closed by the OS or firewalls anyway - client applications should be aware of that and try to reconnect).

  • IMPORTANT CHANGE: when starting arangod, the server will drop the process privileges to the specified values in options --server.uid and --server.gid instantly after parsing the startup options.

    That means when either --server.uid or --server.gid are set, the privilege change will happen earlier. This may prevent binding the server to an endpoint with a port number lower than 1024 if the arangodb user has no privileges for that. Previous versions of ArangoDB changed the privileges later, so some startup actions were still carried out under the invoking user (i.e. likely root when started via init.d or system scripts) and especially binding to low port numbers was still possible there.

    The default privileges of the user arangodb will not be sufficient for binding to port numbers lower than 1024. To have ArangoDB 2.7 bind to a port number lower than 1024, it needs to be started either with a different, privileged user, or the privileges of the arangodb user have to be raised manually beforehand.

  • added AQL optimizer rule patch-update-statements

  • Linux startup scripts and systemd configuration for arangod now try to adjust the NOFILE (number of open files) limits for the process. The limit value is set to 131072 (128k) when ArangoDB is started via start/stop commands

  • When ArangoDB is started/stopped manually via the start/stop commands, the main process will wait for up to 10 seconds after it forks the supervisor and arangod child processes. If the startup fails within that period, the start/stop script will fail with an exit code other than zero. If the startup of the supervisor or arangod is still ongoing after 10 seconds, the main program will still return with exit code 0. The limit of 10 seconds is arbitrary because the time required for a startup is not known in advance.

  • added startup option --database.throw-collection-not-loaded-error

    Accessing a not-yet loaded collection will automatically load a collection on first access. This flag controls what happens in case an operation would need to wait for another thread to finalize loading a collection. If set to true, then the first operation that accesses an unloaded collection will load it. Further threads that try to access the same collection while it is still loading immediately fail with an error (1238, collection not loaded). This is to prevent all server threads from being blocked while waiting on the same collection to finish loading. When the first thread has completed loading the collection, the collection becomes regularly available, and all operations from that point on can be carried out normally, and error 1238 will not be thrown anymore for that collection.

    If set to false, the first thread that accesses a not-yet loaded collection will still load it. Other threads that try to access the collection while loading will not fail with error 1238 but instead block until the collection is fully loaded. This configuration might lead to all server threads being blocked because they are all waiting for the same collection to complete loading. Setting the option to true will prevent this from happening, but requires clients to catch error 1238 and react on it (maybe by scheduling a retry for later).

    The default value is false.

  • added better control-C support in arangosh

    When CTRL-C is pressed in arangosh, it will now print a ^C first. Pressing CTRL-C again will reset the prompt if something was entered before, or quit arangosh if no command was entered directly before.

    This affects only the arangosh version built with Readline support (Linux and MacOS).

    The MacOS version of ArangoDB for Homebrew now depends on Readline, too. The Homebrew formula has been changed accordingly. When self-compiling ArangoDB on MacOS without Homebrew, Readline now is a prerequisite.

  • increased default value for collection-specific indexBuckets value from 1 to 8

    Collections created from 2.7 on will use the new default value of 8 if not overridden on collection creation or later using collection.properties({ indexBuckets: ... }).

    The indexBuckets value determines the number of buckets to use for indexes of type primary, hash and edge. Having multiple index buckets allows splitting an index into smaller components, which can be filled in parallel when a collection is loading. Additionally, resizing and reallocation of indexes are faster and less intrusive if the index uses multiple buckets, because resize and reallocation will affect only data in a single bucket instead of all index values.

    The index buckets will be filled in parallel when loading a collection if the collection has an indexBuckets value greater than 1 and the collection contains a significant amount of documents/edges (the current threshold is 256K documents but this value may change in future versions of ArangoDB).
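
    For example (a sketch; the collection name and bucket values are arbitrary):

    db._create("mycollection", { indexBuckets: 16 });   /* set at creation time */
    db.mycollection.properties({ indexBuckets: 8 });    /* or change it later */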

  • changed HTTP client to use poll instead of select on Linux and MacOS

    This affects the ArangoShell and user-defined JavaScript code running inside arangod that initiates its own HTTP calls.

    Using poll instead of select allows using arbitrarily high file descriptors (bigger than the compiled-in FD_SETSIZE). Server connections are still handled using epoll, which has never been affected by FD_SETSIZE.

  • implemented AQL LIKE function using ICU regexes

  • added RETURN DISTINCT for AQL queries to return unique results:

    FOR doc IN collection
      RETURN DISTINCT doc.status
    

    This change also introduces DISTINCT as an AQL keyword.

  • removed createNamedQueue() and addJob() functions from org/arangodb/tasks

  • use less locks and more atomic variables in the internal dispatcher and V8 context handling implementations. This leads to improved throughput in some ArangoDB internals and allows for higher HTTP request throughput for many operations.

    A short overview of the improvements can be found here:

    https://www.arangodb.com/2015/08/throughput-enhancements/

  • added shorthand notation for attribute names in AQL object literals:

    LET name = "Peter"
    LET age = 42
    RETURN { name, age }
    

    The above is the shorthand equivalent of the generic form

    LET name = "Peter"
    LET age = 42
    RETURN { name : name, age : age }
    
  • removed configure option --enable-timings

    This option did not have any effect.

  • removed configure option --enable-figures

    This option previously controlled whether HTTP request statistics code was compiled into ArangoDB or not. The previous default value was true so statistics code was available in official packages. Setting the option to false led to compile errors so it is doubtful the default value was ever changed. By removing the option some internal statistics code was also simplified.

  • removed run-time manipulation methods for server endpoints:

    • db._removeEndpoint()
    • db._configureEndpoint()
    • HTTP POST /_api/endpoint
    • HTTP DELETE /_api/endpoint
  • AQL query result cache

    The query result cache can optionally cache the complete results of all or selected AQL queries. It can be operated in the following modes:

    • off: the cache is disabled. No query results will be stored
    • on: the cache will store the results of all AQL queries unless their cache attribute flag is set to false
    • demand: the cache will store the results of AQL queries that have their cache attribute set to true, but will ignore all others

    The mode can be set at server startup using the --database.query-cache-mode configuration option and later changed at runtime.

    The following HTTP REST APIs have been added for controlling the query cache:

    • HTTP GET /_api/query-cache/properties: returns the global query cache configuration
    • HTTP PUT /_api/query-cache/properties: modifies the global query cache configuration
    • HTTP DELETE /_api/query-cache: invalidates all results in the query cache

    The following JavaScript functions have been added for controlling the query cache (see the sketch after this list):

    • require("org/arangodb/aql/cache").properties(): returns the global query cache configuration
    • require("org/arangodb/aql/cache").properties(properties): modifies the global query cache configuration
    • require("org/arangodb/aql/cache").clear(): invalidates all results in the query cache
  • do not link arangoimp against V8

  • AQL function call arguments optimization

    This will lead to arguments in function calls inside AQL queries not being copied but passed by reference. This may speed up calls to functions with bigger argument values or queries that call functions many times.

  • upgraded V8 version to 4.3.61

  • removed deprecated AQL SKIPLIST function.

    This function was introduced in older versions of ArangoDB with a less powerful query optimizer to retrieve data from a skiplist index using a LIMIT clause. It was marked as deprecated in ArangoDB 2.6.

    Since ArangoDB 2.3 the behavior of the SKIPLIST function can be emulated using regular AQL constructs, e.g.

    FOR doc IN @@collection
      FILTER doc.value >= @value
      SORT doc.value DESC
      LIMIT 1
      RETURN doc
    
  • the skip() function for simple queries does not accept negative input any longer. This feature was deprecated in 2.6.0.

  • fix exception handling

    In some cases JavaScript exceptions would re-throw without information of the original problem. Now the original exception is logged for failure analysis.

  • based REST API method PUT /_api/simple/all on the cursor API and make it use AQL internally.

    The change speeds up this REST API method and will lead to additional query information being returned by the REST API. Clients can use this extra information or ignore it.

  • Foxx Queue job success/failure handlers arguments have changed from (jobId, jobData, result, jobFailures) to (result, jobData, job).

  • added Foxx Queue job options repeatTimes, repeatUntil and repeatDelay to automatically re-schedule jobs when they are completed.
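
    A combined sketch of the new handler signature and the repeat options (queue name, mount path, script name and values are hypothetical):

    var Foxx = require("org/arangodb/foxx");
    var queue = Foxx.queues.create("myQueue");

    queue.push({ mount: "/my-app", name: "send-mail" }, { to: "user@example.com" }, {
      success: function (result, jobData, job) { /* new-style success handler */ },
      failure: function (result, jobData, job) { /* new-style failure handler */ },
      repeatTimes: 5,      /* re-schedule the job up to 5 times */
      repeatDelay: 1000    /* delay between repetitions (illustrative value) */
    });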

  • added Foxx manifest configuration type password to mask values in the web interface.

  • fixed default values in Foxx manifest configurations sometimes not being used as defaults.

  • fixed optional parameters in Foxx manifest configurations sometimes not being cleared correctly.

  • Foxx dependencies can now be marked as optional using a slightly more verbose syntax in your manifest file.

  • converted Foxx constructors to ES6 classes so you can extend them using class syntax.

  • updated aqb to 2.0.

  • updated chai to 3.0.

  • Use more madvise calls to speed up things when memory is tight, in particular at load time but also for random accesses later.

  • Overhauled web interface

    The web interface now has a new design.

    The API documentation for ArangoDB has been moved from "Tools" to "Links" in the web interface.

    The "Applications" tab in the web interfaces has been renamed to "Services".

v2.6.12 (2015-12-02)

  • fixed disappearing of documents for collections transferred via sync if the collection was dropped right before synchronization and the drop and (re-)create collection markers were located in the same WAL file

  • added missing lock instruction for primary index in compactor size calculation

  • fixed issue #1589

  • fixed issue #1583

  • Foxx: optional configuration options no longer log validation errors when assigned empty values (#1495)

v2.6.11 (2015-11-18)

  • fixed potentially invalid pointer access in shaper when the currently accessed document got re-located by the WAL collector at the very same time

v2.6.10 (2015-11-10)

  • disable replication appliers when starting in modes --upgrade, --no-server and --check-upgrade

  • more detailed output in arango-dfdb

  • fixed potential deadlock in collection status changing on Windows

  • issue #1521: Can't dump/restore with user and password

v2.6.9 (2015-09-29)

  • added "special" password ARANGODB_DEFAULT_ROOT_PASSWORD. If you pass ARANGODB_DEFAULT_ROOT_PASSWORD as password, it will read the password from the environment variable ARANGODB_DEFAULT_ROOT_PASSWORD

  • fixed failing AQL skiplist, sort and limit combination

    When using a skiplist index on an attribute (say "a"), combining a SORT on that attribute with a LIMIT that uses an offset caused the result to be empty, e.g.:

    require("internal").db.test.ensureSkiplist("a");
    require("internal").db._query("FOR x IN test SORT x.a LIMIT 10, 10 RETURN x");

    The result was always empty, no matter how many documents were stored in test. This is now fixed.

v2.6.8 (2015-09-09)

  • ARM only:

    The ArangoDB packages for ARM require the kernel to allow unaligned memory access. How the kernel handles unaligned memory access is configurable at runtime by checking and adjusting the contents of /proc/cpu/alignment.

    In order to operate on ARM, ArangoDB requires bit 1 to be set. This will make the kernel trap and adjust unaligned memory accesses. If this bit is not set, the kernel may send a SIGBUS signal to ArangoDB and terminate it.

    To set bit 1 in /proc/cpu/alignment use the following command as a privileged user (e.g. root):

    echo "2" > /proc/cpu/alignment
    

    Note that this setting affects all user processes and not just ArangoDB. Setting the alignment with the above command will also not make the setting permanent, so it will be lost after a restart of the system. In order to make the setting permanent, it should be executed during system startup or before starting arangod.

    The ArangoDB start/stop scripts do not adjust the alignment setting, but rely on the environment to have the correct alignment setting already. The reason for this is that the alignment settings also affect all other user processes (which ArangoDB is not aware of) and thus may have side-effects outside of ArangoDB. It is therefore more reasonable to have the system administrator carry out the change.

v2.6.7 (2015-08-25)

  • improved AssocMulti index performance when resizing.

    This makes the edge index perform less I/O when under memory pressure.

v2.6.6 (2015-08-23)

  • added startup option --server.additional-threads to create separate queues for slow requests.

v2.6.5 (2015-08-17)

  • added startup option --database.throw-collection-not-loaded-error

    Accessing a not-yet loaded collection will automatically load a collection on first access. This flag controls what happens in case an operation would need to wait for another thread to finalize loading a collection. If set to true, then the first operation that accesses an unloaded collection will load it. Further threads that try to access the same collection while it is still loading immediately fail with an error (1238, collection not loaded). This is to prevent all server threads from being blocked while waiting on the same collection to finish loading. When the first thread has completed loading the collection, the collection becomes regularly available, and all operations from that point on can be carried out normally, and error 1238 will not be thrown anymore for that collection.

    If set to false, the first thread that accesses a not-yet loaded collection will still load it. Other threads that try to access the collection while loading will not fail with error 1238 but instead block until the collection is fully loaded. This configuration might lead to all server threads being blocked because they are all waiting for the same collection to complete loading. Setting the option to true will prevent this from happening, but requires clients to catch error 1238 and react on it (maybe by scheduling a retry for later).

    The default value is false.

  • fixed busy wait loop in scheduler threads that sometimes consumed 100% CPU while waiting for events on connections closed unexpectedly by the client side

  • handle attribute indexBuckets when restoring collections via arangorestore. Previously the indexBuckets attribute value from the dump was ignored, and the server default value for indexBuckets was used when restoring a collection.

  • fixed "EscapeValue already set error" crash in V8 actions that might have occurred when canceling V8-based operations.

v2.6.4 (2015-08-01)

  • V8: Upgrade to version 4.1.0.27 - this is intended to be the stable V8 version.

  • fixed issue #1424: Arango shell should not processing arrows pushing on keyboard

v2.6.3 (2015-07-21)

  • issue #1409: Document values with null character truncated

v2.6.2 (2015-07-04)

  • fixed issue #1383: bindVars for HTTP API doesn't work with empty string

  • fixed handling of default values in Foxx manifest configurations

  • fixed handling of optional parameters in Foxx manifest configurations

  • fixed a reference error being thrown in Foxx queues when a function-based job type is used that is not available and no options object is passed to queue.push

v2.6.1 (2015-06-24)

  • Add missing swagger files to cmake build. fixes #1368

  • fixed documentation errors

v2.6.0 (2015-06-20)

  • using negative values for SimpleQuery.skip() is deprecated. This functionality will be removed in future versions of ArangoDB.

  • The following simple query functions are now deprecated:

    • collection.near
    • collection.within
    • collection.geo
    • collection.fulltext
    • collection.range
    • collection.closedRange

    This also leads to the following REST API methods being deprecated from now on:

    • PUT /_api/simple/near
    • PUT /_api/simple/within
    • PUT /_api/simple/fulltext
    • PUT /_api/simple/range

    It is recommended to replace calls to these functions or APIs with equivalent AQL queries, which are more flexible because they can be combined with other operations:

    FOR doc IN NEAR(@@collection, @latitude, @longitude, @limit)
      RETURN doc
    
    FOR doc IN WITHIN(@@collection, @latitude, @longitude, @radius, @distanceAttributeName)
      RETURN doc
    
    FOR doc IN FULLTEXT(@@collection, @attributeName, @queryString, @limit)
      RETURN doc
    
    FOR doc IN @@collection
      FILTER doc.value >= @left && doc.value < @right
      LIMIT @skip, @limit
      RETURN doc
    

    The above simple query functions and REST API methods may be removed in future versions of ArangoDB.

  • deprecated now-obsolete AQL SKIPLIST function

    The function was introduced in older versions of ArangoDB with a less powerful query optimizer to retrieve data from a skiplist index using a LIMIT clause.

    Since 2.3 the same goal can be achieved by using regular AQL constructs, e.g.

    FOR doc IN collection FILTER doc.value >= @value SORT doc.value DESC LIMIT 1 RETURN doc
    
  • fixed issues when switching the database inside tasks and during shutdown of database cursors

    These features were added during 2.6 alpha stage so the fixes affect devel/2.6-alpha builds only

  • issue #1360: improved foxx-manager help

  • added --enable-tcmalloc configure option.

    When this option is set, arangod and the client tools will be linked against tcmalloc, which replaces the system allocator. When the option is set, a tcmalloc library must be present on the system under one of the names libtcmalloc, libtcmalloc_minimal or libtcmalloc_debug.

    As this is a configure option, it is supported for manual builds on Linux-like systems only. tcmalloc support is currently experimental.

  • issue #1353: Windows: HTTP API - incorrect path in errorMessage

  • issue #1347: added option --create-database for arangorestore.

    Setting this option to true will now create the target database if it does not exist. When creating the target database, the username and passwords passed to arangorestore will be used to create an initial user for the new database.

  • issue #1345: advanced debug information for User Functions

  • issue #1341: Can't use bindvars in UPSERT

  • fixed vulnerability in JWT implementation.

  • changed default value of option --database.ignore-datafile-errors from true to false

    If the new default value of false is used, then arangod will refuse loading collections that contain datafiles with CRC mismatches or other errors. A collection with datafile errors will then become unavailable. This prevents follow up errors from happening.

    The only way to access such collection is to use the datafile debugger (arango-dfdb) and try to repair or truncate the datafile with it.

    If --database.ignore-datafile-errors is set to true, then collections will become available even if parts of their data cannot be loaded. This helps availability, but may cause (partial) data loss and follow up errors.

  • added server startup option --server.session-timeout for controlling the timeout of user sessions in the web interface

  • add sessions and cookie authentication for ArangoDB's web interface

    ArangoDB's built-in web interface now uses sessions. Session ids are stored in cookies, so clients using the web interface must accept cookies in order to use it.

  • web interface: display query execution time in AQL editor

  • web interface: renamed AQL query submit button to execute

  • web interface: added query explain feature in AQL editor

  • web interface: demo page added. only working if demo data is available, hidden otherwise

  • web interface: added support for custom app scripts with optional arguments and results

  • web interface: mounted apps that need to be configured are now indicated in the app overview

  • web interface: added button for running tests to app details

  • web interface: added button for configuring app dependencies to app details

  • web interface: upgraded API documentation to use Swagger 2

  • INCOMPATIBLE CHANGE

    removed startup option --log.severity

    The docs for --log.severity mentioned lots of severities (e.g. exception, technical, functional, development) but only a few severities (e.g. all, human) were actually used, with human being the default and all enabling the additional logging of requests. So the option pretended to control a lot of things which it actually didn't. Additionally, the option --log.requests-file was around for a long time already, also controlling request logging.

    Because the --log.severity option effectively did not control that much, it was removed. A side effect of removing the option is that 2.5 installations which used --log.severity all will not log requests after the upgrade to 2.6. This can be adjusted by setting the --log.requests-file option.

  • add backtrace to fatal log events

  • added optional limit parameter for AQL function FULLTEXT

  • make fulltext index also index text values contained in direct sub-objects of the indexed attribute.

    Previous versions of ArangoDB only indexed the attribute value if it was a string. Sub-attributes of the index attribute were ignored when fulltext indexing.

    Now, if the index attribute value is an object, the object's values will each be included in the fulltext index if they are strings. If the index attribute value is an array, the array's values will each be included in the fulltext index if they are strings.

    For example, with a fulltext index present on the translations attribute, the following text values will now be indexed:

    var c = db._create("example");
    c.ensureFulltextIndex("translations");
    c.insert({ translations: { en: "fox", de: "Fuchs", fr: "renard", ru: "лиса" } });
    c.insert({ translations: "Fox is the English translation of the German word Fuchs" });
    c.insert({ translations: [ "ArangoDB", "document", "database", "Foxx" ] });
    
    c.fulltext("translations", "лиса").toArray();       // returns only first document
    c.fulltext("translations", "Fox").toArray();        // returns first and second documents
    c.fulltext("translations", "prefix:Fox").toArray(); // returns all three documents
    
  • added batch document removal and lookup commands:

    collection.lookupByKeys(keys)
    collection.removeByKeys(keys)
    

    These commands can be used to perform multi-document lookup and removal operations efficiently from the ArangoShell. The argument to these operations is an array of document keys.

    Also added HTTP APIs for batch document commands:

    • PUT /_api/simple/lookup-by-keys
    • PUT /_api/simple/remove-by-keys
  • properly prefix document address URLs with the current database name for calls to the REST API method GET /_api/document?collection=... (that method will return partial URLs to all documents in the collection).

    Previous versions of ArangoDB returned the URLs starting with /_api/ but without the current database name, e.g. /_api/document/mycollection/mykey. Starting with 2.6, the response URLs will include the database name as well, e.g. /_db/_system/_api/document/mycollection/mykey.

  • added dedicated collection export HTTP REST API

    ArangoDB now provides a dedicated collection export API, which can take snapshots of entire collections more efficiently than the general-purpose cursor API. The export API is useful to transfer the contents of an entire collection to a client application. It provides optional filtering on specific attributes.

    The export API is available at endpoint POST /_api/export?collection=.... The API has the same return value structure as the already established cursor API (POST /_api/cursor).

    An introduction to the export API is given in this blog post: http://jsteemann.github.io/blog/2015/04/04/more-efficient-data-exports/

  • subquery optimizations for AQL queries

    This optimization avoids copying intermediate results into subqueries that are not required by the subquery.

    A brief description can be found here: http://jsteemann.github.io/blog/2015/05/04/subquery-optimizations/

  • return value optimization for AQL queries

    This optimization avoids copying the final query result inside the query's main ReturnNode.

    A brief description can be found here: http://jsteemann.github.io/blog/2015/05/04/return-value-optimization-for-aql/

  • speed up AQL queries containing big IN lists for index lookups

    IN lists used for index lookups had performance issues in previous versions of ArangoDB. These issues have been addressed in 2.6 so using bigger IN lists for filtering is much faster.

    A brief description can be found here: http://jsteemann.github.io/blog/2015/05/07/in-list-improvements/

  • allow @ and . characters in document keys, too

    This change also leads to document keys being URL-encoded when returned in HTTP location response headers.

  • added alternative implementation for AQL COLLECT

    The alternative method uses a hash table for grouping and does not require its input elements to be sorted. It will be taken into account by the optimizer for COLLECT statements that do not use an INTO clause.

    In case a COLLECT statement can use the hash table variant, the optimizer will create an extra plan for it at the beginning of the planning phase. In this plan, no extra SORT node will be added in front of the COLLECT because the hash table variant of COLLECT does not require sorted input. Instead, a SORT node will be added after it to sort its output. This SORT node may be optimized away again in later stages. If the sort order of the result is irrelevant to the user, adding an extra SORT null after a hash COLLECT operation will allow the optimizer to remove the sorts altogether.

    In addition to the hash table variant of COLLECT, the optimizer will modify the original plan to use the regular COLLECT implementation. As this implementation requires sorted input, the optimizer will insert a SORT node in front of the COLLECT. This SORT node may be optimized away in later stages.

    The created plans will then be shipped through the regular optimization pipeline. In the end, the optimizer will pick the plan with the lowest estimated total cost as usual. The hash table variant does not require an up-front sort of the input, and will thus be preferred over the regular COLLECT if the optimizer estimates many input elements for the COLLECT node and cannot use an index to sort them.

    The optimizer can be explicitly told to use the regular sorted variant of COLLECT by suffixing a COLLECT statement with OPTIONS { "method" : "sorted" }. This will override the optimizer guesswork and only produce the sorted variant of COLLECT.

    A blog post on the new COLLECT implementation can be found here: http://jsteemann.github.io/blog/2015/04/22/collecting-with-a-hash-table/
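
    For example, forcing the sorted variant, and hinting that the result order is irrelevant so the optimizer may drop the sorts (collection and attribute names are arbitrary):

    db._query("FOR doc IN orders COLLECT status = doc.status OPTIONS { method: 'sorted' } RETURN status");

    /* result order irrelevant: SORT null allows the optimizer to remove the sorts */
    db._query("FOR doc IN orders COLLECT status = doc.status SORT null RETURN status");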

  • refactored HTTP REST API for cursors

    The HTTP REST API for cursors (/_api/cursor) has been refactored to improve its performance and use less memory.

    A post showing some of the performance improvements can be found here: http://jsteemann.github.io/blog/2015/04/01/improvements-for-the-cursor-api/

  • simplified return value syntax for data-modification AQL queries

    Since version 2.4, ArangoDB allows returning results from data-modification AQL queries. The syntax for this was quite limited and verbose:

    FOR i IN 1..10
      INSERT { value: i } IN test
      LET inserted = NEW
      RETURN inserted
    

    The LET inserted = NEW RETURN inserted part was required literally in order to return the inserted documents. No calculations could be made using the inserted documents.

    This is now more flexible. After a data-modification clause (e.g. INSERT, UPDATE, REPLACE, REMOVE, UPSERT) there can follow any number of LET calculations. These calculations can refer to the pseudo-values OLD and NEW that are created by the data-modification statements.

    This allows returning projections of inserted or updated documents, e.g.:

    FOR i IN 1..10
      INSERT { value: i } IN test
      RETURN { _key: NEW._key, value: i }
    

    Still not every construct is allowed after a data-modification clause. For example, no functions can be called that may access documents.

    More information can be found here: http://jsteemann.github.io/blog/2015/03/27/improvements-for-data-modification-queries/

  • added AQL UPSERT statement

    This adds an UPSERT statement to AQL that is a combination of both INSERT and UPDATE / REPLACE. The UPSERT will search for a matching document using a user-provided example. If no document matches the example, the insert part of the UPSERT statement will be executed. If there is a match, the update / replace part will be carried out:

    UPSERT { page: 'index.html' }                 /* search example */
      INSERT { page: 'index.html', pageViews: 1 } /* insert part */
      UPDATE { pageViews: OLD.pageViews + 1 }     /* update part */
      IN pageViews
    

    UPSERT can be used with an UPDATE or REPLACE clause. The UPDATE clause will perform a partial update of the found document, whereas the REPLACE clause will replace the found document entirely. The UPDATE or REPLACE parts can refer to the pseudo-value OLD, which contains all attributes of the found document.

    UPSERT statements can optionally return values. In the following query, the return attribute found will return the found document before the UPDATE was applied. If no document was found, found will contain a value of null. The updated result attribute will contain the inserted / updated document:

    UPSERT { page: 'index.html' }                 /* search example */
      INSERT { page: 'index.html', pageViews: 1 } /* insert part */
      UPDATE { pageViews: OLD.pageViews + 1 }     /* update part */
      IN pageViews
      RETURN { found: OLD, updated: NEW }
    

    A more detailed description of UPSERT can be found here: http://jsteemann.github.io/blog/2015/03/27/preview-of-the-upsert-command/

  • adjusted default configuration value for --server.backlog-size from 10 to 64.

  • issue #1231: bug xor feature in AQL: LENGTH(null) == 4

    This changes the behavior of the AQL LENGTH function as follows:

    • if the single argument to LENGTH() is null, then the result will now be 0. In previous versions of ArangoDB, the result of LENGTH(null) was 4.

    • if the single argument to LENGTH() is true, then the result will now be 1. In previous versions of ArangoDB, the result of LENGTH(true) was 4.

    • if the single argument to LENGTH() is false, then the result will now be 0. In previous versions of ArangoDB, the result of LENGTH(false) was 5.

    The results of LENGTH() with string, numeric, array, or object argument values do not change.
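
    For example, in arangosh:

    db._query("RETURN [ LENGTH(null), LENGTH(true), LENGTH(false) ]").toArray();
    // [ [ 0, 1, 0 ] ]  (previously [ 4, 4, 5 ])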

  • issue #1298: Bulk import if data already exists (#1298)

    This change extends the HTTP REST API for bulk imports as follows:

    When documents are imported and the _key attribute is specified for them, the import can be used for inserting and updating/replacing documents. Previously, the import could be used for inserting new documents only, and re-inserting a document with an existing key would have failed with a unique key constraint violated error.

    The above behavior is still the default. However, the API now allows controlling the behavior in case of a unique key constraint error via the optional URL parameter onDuplicate.

    This parameter can have one of the following values:

    • error: when a unique key constraint error occurs, do not import or update the document but report an error. This is the default.

    • update: when a unique key constraint error occurs, try to (partially) update the existing document with the data specified in the import. This may still fail if the document would violate secondary unique indexes. Only the attributes present in the import data will be updated and other attributes already present will be preserved. The number of updated documents will be reported in the updated attribute of the HTTP API result.

    • replace: when a unique key constraint error occurs, try to fully replace the existing document with the data specified in the import. This may still fail if the document would violate secondary unique indexes. The number of replaced documents will be reported in the updated attribute of the HTTP API result.

    • ignore: when a unique key constraint error occurs, ignore this error. There will be no insert, update or replace for the particular document. Ignored documents will be reported separately in the ignored attribute of the HTTP API result.

    The result of the HTTP import API will now contain the attributes ignored and updated, which contain the number of ignored and updated documents respectively. These attributes will contain a value of zero unless the onDuplicate URL parameter is set to either update or replace (in this case the updated attribute may contain non-zero values) or ignore (in this case the ignored attribute may contain a non-zero value).

    To support the feature, arangoimp also has a new command line option --on-duplicate which can have one of the values error, update, replace, ignore. The default value is error.

    A few examples for using arangoimp with the --on-duplicate option can be found here: http://jsteemann.github.io/blog/2015/04/14/updating-documents-with-arangoimp/

  • changed behavior of db._query() in the ArangoShell:

    if the command's result is printed in the shell, the first 10 results will be printed. Previously only a basic description of the underlying query result cursor was printed. Additionally, if the cursor result contains more than 10 results, the cursor is assigned to a global variable more, which can be used to iterate over the cursor result.

    Example:

    arangosh [_system]> db._query("FOR i IN 1..15 RETURN i")
    [object ArangoQueryCursor, count: 15, hasMore: true]
    
    [
      1,
      2,
      3,
      4,
      5,
      6,
      7,
      8,
      9,
      10
    ]
    
    type 'more' to show more documents
    
    
    arangosh [_system]> more
    [object ArangoQueryCursor, count: 15, hasMore: false]
    
    [
      11,
      12,
      13,
      14,
      15
    ]
    
  • Disallow batchSize value 0 in HTTP POST /_api/cursor:

    The HTTP REST API POST /_api/cursor does not accept a batchSize parameter value of 0 any longer. A batch size of 0 never made much sense, but previous versions of ArangoDB did not check for this value. Now creating a cursor using a batchSize value 0 will result in an HTTP 400 error response

  • REST Server: fix memory leaks when failing to add jobs

  • 'EDGES' AQL Function

    The AQL function EDGES got a new fifth options parameter. Right now only one option is available: 'includeVertices'. This is a boolean parameter that allows modifying the result of the EDGES function. The default is 'includeVertices: false', which does not have any effect. 'includeVertices: true' modifies the result such that objects of the form { vertex: <vertexDocument>, edge: <edgeDocument> } are returned.

  • INCOMPATIBLE CHANGE:

    The result format of the AQL function NEIGHBORS has been changed. Previously it returned an array of objects containing 'vertex' and 'edge' attributes. Now it will only contain the vertex directly. Also, an additional option 'includeData' has been added. It defines whether only the 'vertex._id' value should be returned (false, the default), or whether the vertex should be looked up in the collection and the complete JSON should be returned (true). Using only the id values can lead to significantly improved performance if this is the only information required.

    In order to get the old result format prior to ArangoDB 2.6, please use the function EDGES instead. EDGES allows for a new option 'includeVertices' which, when set to true, returns exactly the old format of NEIGHBORS. Example:

    NEIGHBORS(<vertexCollection>, <edgeCollection>, <vertex>, <direction>, <example>)
    

    This can now be achieved by:

    EDGES(<edgeCollection>, <vertex>, <direction>, <example>, {includeVertices: true})
    

    If you are nesting several NEIGHBORS steps you can speed up their performance in the following way:

    Old example:

    FOR va IN NEIGHBORS(Users, relations, 'Users/123', 'outbound')
      FOR vc IN NEIGHBORS(Products, relations, va.vertex._id, 'outbound')
        RETURN vc

    This can now be achieved by:

    FOR va IN NEIGHBORS(Users, relations, 'Users/123', 'outbound')
      FOR vc IN NEIGHBORS(Products, relations, va, 'outbound', null, {includeData: true})
        RETURN vc

    Here, the intermediate result va is used directly, and includeData is requested only for the final step.

  • INCOMPATIBLE CHANGE:

    The AQL function GRAPH_NEIGHBORS now provides an additional option includeData. This option allows controlling whether the function should return the complete vertices or just their IDs. Returning only the IDs instead of the full vertices can lead to improved performance.

    If includeData is set to true, all vertices in the result will be returned with all their attributes. The default value of includeData is false. This makes the default function results incompatible with previous versions of ArangoDB.

    To get the old result style in ArangoDB 2.6, please set the options as follows in calls to GRAPH_NEIGHBORS:

    GRAPH_NEIGHBORS(<graph>, <vertex>, { includeData: true })
    
  • INCOMPATIBLE CHANGE:

    The AQL function GRAPH_COMMON_NEIGHBORS now provides an additional option includeData. This option allows controlling whether the function should return the complete vertices or just their IDs. Returning only the IDs instead of the full vertices can lead to improved performance.

    If includeData is set to true, all vertices in the result will be returned with all their attributes. The default value of includeData is false. This makes the default function results incompatible with previous versions of ArangoDB.

    To get the old result style in ArangoDB 2.6, please set the options as follows in calls to GRAPH_COMMON_NEIGHBORS:

    GRAPH_COMMON_NEIGHBORS(<graph>, <vertexExamples1>, <vertexExamples2>, { includeData: true }, { includeData: true })
    
  • INCOMPATIBLE CHANGE:

    The AQL function GRAPH_SHORTEST_PATH now provides an additional option includeData. This option allows controlling whether the function should return the complete vertices and edges or just their IDs. Returning only the IDs instead of full vertices and edges can lead to improved performance.

    If includeData is set to true, all vertices and edges in the result will be returned with all their attributes. There is also an optional parameter includePath of type object. It has two optional sub-attributes vertices and edges, both of type boolean. Both can be set individually, and the result will include all vertices on the path if includePath.vertices == true and all edges if includePath.edges == true respectively.

    The default value of includeData is false, and paths are now excluded by default. This makes the default function results incompatible with previous versions of ArangoDB.

    To get the old result style in ArangoDB 2.6, please set the options as follows in calls to GRAPH_SHORTEST_PATH:

    GRAPH_SHORTEST_PATH(<graph>, <source>, <target>, { includeData: true, includePath: { edges: true, vertices: true } })
    

    The attributes startVertex and vertex that were present in the results of GRAPH_SHORTEST_PATH in previous versions of ArangoDB will not be produced in 2.6. To calculate these attributes in 2.6, please extract the first and last elements from the vertices result attribute.

  • INCOMPATIBLE CHANGE:

    The AQL function GRAPH_DISTANCE_TO will now return only the id of the destination vertex in the vertex attribute, and not the full vertex data with all vertex attributes.

  • INCOMPATIBLE CHANGE:

    All graph measurement functions in the JavaScript module general-graph that calculate a single figure previously returned an array containing just that figure. Now these functions return the figure directly instead of wrapping it in an array.

    The affected functions are (a short example follows the list):

    • graph._absoluteEccentricity
    • graph._eccentricity
    • graph._absoluteCloseness
    • graph._closeness
    • graph._absoluteBetweenness
    • graph._betweenness
    • graph._radius
    • graph._diameter
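
    For example (a sketch; the graph name is hypothetical):

    var graph = require("org/arangodb/general-graph")._graph("myGraph");
    graph._radius();   // now returns e.g. 3 instead of [ 3 ]
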
  • Create the _graphs collection in new databases with waitForSync attribute set to false

    The previous waitForSync value was true, so the default behavior when creating and dropping graphs via the HTTP REST API changes as follows if the new setting is in effect:

    • POST /_api/graph by default returns HTTP 202 instead of HTTP 201
    • DELETE /_api/graph/graph-name by default returns HTTP 202 instead of HTTP 201

    If the _graphs collection still has its waitForSync value set to true, then the HTTP status code will not change.

  • Upgraded ICU to version 54; this increases performance in many places. This is based on https://code.google.com/p/chromium/issues/detail?id=428145

  • added support for HTTP push aka chunked encoding

  • issue #1051: add info whether server is running in service or user mode?

    This will add a "mode" attribute to the result of the result of HTTP GET /_api/version?details=true

    "mode" can have the following values:

    • standalone: server was started manually (e.g. on command-line)
    • service: server is running as a Windows service, in daemon mode or under the supervisor
  • improve system error messages in Windows port

  • increased default value of --server.request-timeout from 300 to 1200 seconds for client tools (arangosh, arangoimp, arangodump, arangorestore)

  • increased default value of --server.connect-timeout from 3 to 5 seconds for client tools (arangosh, arangoimp, arangodump, arangorestore)

  • added startup option --server.foxx-queues-poll-interval

    This startup option controls the frequency with which the Foxx queues manager is checking the queue (or queues) for jobs to be executed.

    The default value is 1 second. Lowering this value will result in the queue manager waking up and checking the queues more frequently, which may increase CPU usage of the server. When not using Foxx queues, this value can be raised to save some CPU time.

  • added startup option --server.foxx-queues

    This startup option controls whether the Foxx queue manager will check queue and job entries. Disabling this option can reduce server load but will prevent jobs added to Foxx queues from being processed at all.

    The default value is true, enabling the Foxx queues feature.

  • make Foxx queues really database-specific.

    Foxx queues were and are stored in a database-specific collection _queues. However, a global cache variable for the queues led to the queue names being treated database-independently, which was wrong.

    Since 2.6, Foxx queue names are truly database-specific, so the same queue name can be used in two different databases for two different queues. In earlier versions, it is advisable to think of queues as already being database-specific, and to use the database name as a queue name prefix to avoid name conflicts, e.g.:

    var queueName = "myQueue";
    var Foxx = require("org/arangodb/foxx");
    Foxx.queues.create(db._name() + ":" + queueName);
    
  • added support for Foxx queue job types defined as app scripts.

    The old job types introduced in 2.4 are still supported but are known to cause issues in 2.5 and later when the server is restarted or the job types are not defined in every thread.

    The new job types avoid this issue by storing an explicit mount path and script name rather than assuming the job type is defined globally. It is strongly recommended to convert your job types to the new script-based system.

  • renamed Foxx sessions option "sessionStorageApp" to "sessionStorage". The option now also accepts session storages directly.

  • Added the following JavaScript methods for file access (a short usage sketch follows the list):

    • fs.copyFile() to copy single files
    • fs.copyRecursive() to copy directory trees
    • fs.chmod() to set the file permissions (non-Windows only)
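
    A short usage sketch (paths are arbitrary; the exact mode format accepted by fs.chmod may differ):

    var fs = require("fs");
    fs.copyFile("/tmp/source.txt", "/tmp/target.txt");       /* copy a single file */
    fs.copyRecursive("/tmp/source-dir", "/tmp/target-dir");  /* copy a directory tree */
    fs.chmod("/tmp/target.txt", "0644");                     /* set file permissions (non-Windows) */
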
  • Added process.env for accessing the process environment from JavaScript code

  • Cluster: kickstarter shutdown routines will more precisely follow the shutdown of its nodes.

  • Cluster: don't delete agency connection objects that are currently in use.

  • Cluster: improve passing along of HTTP errors

  • fixed issue #1247: debian init script problems

  • multi-threaded index creation on collection load

    When a collection contains more than one secondary index, they can be built in memory in parallel when the collection is loaded. How many threads are used for parallel index creation is determined by the new configuration parameter --database.index-threads. If this is set to 0, indexes are built by the opening thread only and sequentially. This is equivalent to the behavior in 2.5 and before.

  • speed up building up primary index when loading collections

  • added count attribute to parameters.json files of collections. This attribute indicates the number of live documents in the collection on unload. It is read when the collection is (re)loaded to determine the initial size for the collection's primary index

  • removed remainders of MRuby integration, removed arangoirb

  • simplified controllers property in Foxx manifests. You can now specify a filename directly if you only want to use a single file mounted at the base URL of your Foxx app.

  • simplified exports property in Foxx manifests. You can now specify a filename directly if you only want to export variables from a single file in your Foxx app.

  • added support for node.js-style exports in Foxx exports. Your Foxx exports file can now export arbitrary values using the module.exports property instead of adding properties to the exports object.

  • added scripts property to Foxx manifests. You should now specify the setup and teardown files as properties of the scripts object in your manifests and can define custom, app-specific scripts that can be executed from the web interface or the CLI.

  • added tests property to Foxx manifests. You can now define test cases using the mocha framework which can then be executed inside ArangoDB.

  • updated joi package to 6.0.8.

  • added extendible package.

  • added Foxx model lifecycle events to repositories. See #1257.

  • speed up resizing of edge index.

  • allow splitting an edge index into buckets which are resized individually. This is controlled by the indexBuckets attribute in the properties of the collection.

  • fix a cluster deadlock bug in larger clusters by marking a thread waiting for a lock on a DBserver as blocked

v2.5.7 (2015-08-02)

  • V8: Upgrade to version 4.1.0.27 - this is intended to be the stable V8 version.

v2.5.6 (2015-07-21)

  • alter Windows build infrastructure so we can properly store pdb files.

  • potentially fixed issue #1313: Wrong metric calculation at dashboard

    Escape whitespace in process name when scanning /proc/pid/stats

    This fixes statistics values read from that file

  • Fixed variable naming in AQL COLLECT INTO results in case the COLLECT is placed in a subquery which itself is followed by other constructs that require variables

v2.5.5 (2015-05-29)

  • fixed vulnerability in JWT implementation.

  • fixed format string for reading /proc/pid/stat

  • take into account barriers used in different V8 contexts

v2.5.4 (2015-05-14)

  • added startup option --log.performance: specifying this option at startup will log performance-related info messages, mainly timings via the regular logging mechanisms

  • cluster fixes

  • fix for recursive copy under Windows

v2.5.3 (2015-04-29)

  • Fix fs.move to work across filesystem borders; Fixes Foxx app installation problems; issue #1292.

  • Fix Foxx app install when installed on a different drive on Windows

  • issue #1322: strange AQL result

  • issue #1318: Inconsistent db._create() syntax

  • issue #1315: queries to a collection fail with an empty response if the collection contains specific JSON data

  • issue #1300: Make arangodump not fail if target directory exists but is empty

  • allow specifying higher values than SOMAXCONN for --server.backlog-size

    Previously, arangod would not start when a --server.backlog-size value was specified that was higher than the platform's SOMAXCONN header value.

    Now, arangod will use the user-provided value for --server.backlog-size and pass it to the listen system call even if the value is higher than SOMAXCONN. If the user-provided value is higher than SOMAXCONN, arangod will log a warning on startup.

  • Fixed a cluster deadlock bug. Mark a thread that is in a RemoteBlock as blocked to allow for additional dispatcher threads to be started.

  • Fix locking in cluster by using another ReadWriteLock class for collections.

  • Add a second DispatcherQueue for AQL in the cluster. This fixes a cluster-AQL thread explosion bug.

v2.5.2 (2015-04-11)

  • modules stored in _modules are automatically flushed when changed

  • added missing query-id parameter in documentation of HTTP DELETE /_api/query endpoint

  • added iterator for edge index in AQL queries

    this change may lead to fewer edges being read when used together with a LIMIT clause

  • make graph viewer in web interface issue less expensive queries for determining a random vertex from the graph, and for determining vertex attributes

  • issue #1285: syntax error, unexpected $undefined near '@_to RETURN obj

    this allows AQL bind parameter names to also start with underscores
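
    For illustration, a query using a bind parameter named _to could now be issued from the ArangoShell like this (collection and value are hypothetical):

    db._query(
      "FOR e IN edges FILTER e._to == @_to RETURN e",  // @_to starts with an underscore
      { _to: "vertices/123" }
    ).toArray();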

  • moved /_api/query to C++

  • issue #1289: Foxx models created from database documents expose an internal method

  • added Foxx.Repository#exists

  • parallelize initialization of V8 context in multiple threads

  • fixed a possible crash when the debug-level was TRACE

  • cluster: do not initialize statistics collection on each coordinator, this fixes a race condition at startup

  • cluster: fix a startup race w.r.t. the _configuration collection

  • search for db:// JavaScript modules only after all local files have been considered, this speeds up the require command in a cluster considerably

  • general cluster speedup in certain areas

v2.5.1 (2015-03-19)

  • fixed bug that caused undefined behavior when an AQL query was killed inside a calculation block

  • fixed memleaks in AQL query cleanup in case out-of-memory errors are thrown

  • by default, Debian and RedHat packages are built with debug symbols

  • added option --database.ignore-logfile-errors

    This option controls how collection datafiles with a CRC mismatch are treated.

    If set to false, CRC mismatch errors in collection datafiles will lead to a collection not being loaded at all. If a collection needs to be loaded during WAL recovery, the WAL recovery will also abort (if not forced with --wal.ignore-recovery-errors true). Setting this flag to false protects users from unintentionally using a collection with corrupted datafiles, from which only a subset of the original data can be recovered.

    If set to true, CRC mismatch errors in collection datafiles will lead to the datafile being partially loaded. All data up to the mismatch will be loaded. This will enable users to continue with collection datafiles that are corrupted, but will result in only a partial load of the data. The WAL recovery will still abort when encountering a collection with a corrupted datafile, at least if --wal.ignore-recovery-errors is not set to true.

    The default value is true, so for collections with corrupted datafiles there might be partial data loads once the WAL recovery has finished. If the WAL recovery needs to load a collection with a corrupted datafile, it will still stop when using the default values.

  • INCOMPATIBLE CHANGE:

    make the arangod server refuse to start if during startup it finds a non-readable parameter.json file for a database or a collection.

    Stopping the startup process in this case requires manual intervention (fixing the unreadable files), but prevents follow-up errors due to ignored databases or collections from happening.

  • datafiles and parameter.json files written by arangod are now created with read and write privileges for the arangod process user, and with read and write privileges for the arangod process group.

    Previously, these files were created with user read and write permissions only.

  • INCOMPATIBLE CHANGE:

    abort WAL recovery if one of the collection's datafiles cannot be opened

  • INCOMPATIBLE CHANGE:

    never try to raise the privileges after dropping them, as this can lead to a race condition while running the recovery

    If you need to run ArangoDB on a port lower than 1024, you must run ArangoDB as root.

  • fixed inefficiencies in remove methods of general-graph module

  • added option --database.slow-query-threshold for controlling the default AQL slow query threshold value on server start

  • add system error strings for Windows in many places

  • rework service startup so we announce 'RUNNING' only when we're finished starting.

  • use the Windows event log for FATAL and ERROR log messages

  • fix service handling in the NSIS Windows installer, specify a human-readable name

  • add the ICU_DATA environment variable to the fatal error messages

  • fixed issue #1265: arangod crashed with SIGSEGV

  • fixed issue #1241: Wildcards in examples

v2.5.0 (2015-03-09)

  • installer fixes for Windows

  • fix for downloading Foxx

  • fixed issue #1258: http pipelining not working?

v2.5.0-beta4 (2015-03-05)

  • fixed issue #1247: debian init script problems

v2.5.0-beta3 (2015-02-27)

  • fix Windows install path calculation in arango

  • fix Windows logging of long strings

  • fix possible undefinedness of const strings in Windows

v2.5.0-beta2 (2015-02-23)

  • fixed issue #1256: agency binary not found

  • fixed issue #1230: API: document/col-name/_key and cursor return different floats

  • front-end: dashboard tries not to (re)load statistics if user has no access

  • V8: Upgrade to version 3.31.74.1

  • etcd: Upgrade to version 2.0 - This requires at least go 1.3 to compile.

  • refuse to start up if ICU wasn't initialized; this will, for example, prevent errors from being printed and libraries from being loaded.

  • front-end: fixed unwanted removal of the index table header after creating a new index

  • fixed issue #1248: chrome: applications filtering not working

  • fixed issue #1198: queries remain in aql editor (front-end) if you navigate through different tabs

  • Simplify usage of Foxx

    Thanks to our user feedback we learned that Foxx is a powerful, yet rather complicated concept. With this release we tried to make it less complicated while keeping all of its strengths. That includes a rewrite of the documentation as well as some code changes as listed below:

    • Moved Foxx applications to a different folder.

      The naming convention now is: <app-path>/_db/<dbname>/<mountpoint>/APP. Before it was: <app-path>/databases/<dbname>/<appname>:<appversion>. This caused some trouble, as apps were cached based on name and version and updates did not apply. Hence the path on the filesystem and the app's access URL had no relation to one another. Now the path on the filesystem is identical to the URL (except for slashes and the appended APP).

    • Rewrite of Foxx routing

      Foxx routing has undergone major internal changes, which we adjusted based on user feedback. This allows us to set the development mode per mount point without having to change paths and keep apps at separate locations.

    • Foxx Development mode

      The development mode used until 2.4 is gone. It has been replaced by a much more mature version. This includes the deprecation of the javascript.dev-app-path parameter, which has no effect anymore since 2.5. Instead of having two separate app directories for production and development, apps now reside in one place, which is used for production as well as for development. Apps can still be put into development mode, changing their behavior compared to production mode. Development mode apps are still reread from disk at every request, and they still produce more debug output.

      This change has also made the startup options --javascript.frontend-development-mode and --javascript.dev-app-path obsolete. The former option will not have any effect when set, and the latter option is only read and used during the upgrade to 2.5 and does not have any effects later.

    • Foxx install process

      Installing Foxx apps used to be a two-step process: import them into ArangoDB and mount them at a specific mountpoint. These operations have now been joined together: you install an app at one mountpoint, and that's it. No fetch, mount, unmount, purge cycle anymore. The commands have been simplified to just:

      • install: get your Foxx app up and running
      • uninstall: shut it down and erase it from disk
    • Foxx error output

      Until 2.4 the errors produced by Foxx were not optimal. Often, the error message was just "unable to parse manifest" and contained only an internal stack trace. In 2.5 we made major improvements there, including a much more fine-grained error output that helps you debug your Foxx apps. The error message printed is now much closer to its source and should help you track it down.

      Also we added the default handlers for unhandled errors in Foxx apps:

      • You will get a nice internal error page whenever your Foxx app is called but was not installed due to any error
      • You will get a proper error message when an uncaught error occurs in any app route

      In production mode the messages above will NOT contain any information about your Foxx internals and are safe to be exposed to third party users. In development mode the messages above will contain the stacktrace (if available), making it easier for your in-house devs to track down errors in the application.

  • added console object to Foxx apps. All Foxx apps now have a console object implementing the familiar Console API in their global scope, which can be used to log diagnostic messages to the database.
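
    A minimal sketch; the messages are written to the database rather than to the terminal:

    // available in the global scope of any Foxx app file
    console.log("service started");
    console.warn("unexpected input received");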

  • added org/arangodb/request module, which provides a simple API for making HTTP requests to external services.
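
    A minimal usage sketch (the URL is a placeholder):

    var request = require("org/arangodb/request");

    // perform a GET request against an external service
    var response = request.get("http://example.com/status");
    // inspect response.statusCode and response.body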

  • added optimizer rule propagate-constant-attributes

    This rule will look inside FILTER conditions for constant value equality comparisons, and insert the constant values in other places in FILTERs. For example, the rule will insert 42 instead of i.value in the second FILTER of the following query:

    FOR i IN c1 FOR j IN c2 FILTER i.value == 42 FILTER j.value == i.value RETURN 1
    
  • added filtered value to AQL query execution statistics

    This value indicates how many documents were filtered by FilterNodes in the AQL query. Note that IndexRangeNodes can also filter documents by selecting only the required ranges from the index. The filtered value will not include the work done by IndexRangeNodes, but only the work performed by FilterNodes.
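
    As an illustration, the counter can be read from a cursor's extra statistics in the ArangoShell (collection and filter are hypothetical):

    var cursor = db._query("FOR doc IN collection FILTER doc.value > 10 RETURN doc");
    cursor.getExtra().stats.filtered;   // number of documents discarded by FilterNodes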

  • added support for sparse hash and skiplist indexes

    Hash and skiplist indexes can optionally be made sparse. Sparse indexes exclude documents in which at least one of the index attributes is either not set or has a value of null.

    As such documents are excluded from sparse indexes, they may contain fewer documents than their non-sparse counterparts. This enables faster indexing and can lead to reduced memory usage in case the indexed attribute does occur only in some, but not all documents of the collection. Sparse indexes will also reduce the number of collisions in non-unique hash indexes in case non-existing or optional attributes are indexed.

    In order to create a sparse index, an object with the attribute sparse can be added to the index creation commands:

    db.collection.ensureHashIndex(attributeName, { sparse: true });
    db.collection.ensureHashIndex(attributeName1, attributeName2, { sparse: true });
    db.collection.ensureUniqueConstraint(attributeName, { sparse: true });
    db.collection.ensureUniqueConstraint(attributeName1, attributeName2, { sparse: true });
    
    db.collection.ensureSkiplist(attributeName, { sparse: true });
    db.collection.ensureSkiplist(attributeName1, attributeName2, { sparse: true });
    db.collection.ensureUniqueSkiplist(attributeName, { sparse: true });
    db.collection.ensureUniqueSkiplist(attributeName1, attributeName2, { sparse: true });
    

    Note that in place of the above specialized index creation commands, it is recommended to use the more general index creation command ensureIndex:

    db.collection.ensureIndex({ type: "hash", sparse: true, unique: true, fields: [ attributeName ] });
    db.collection.ensureIndex({ type: "skiplist", sparse: false, unique: false, fields: [ "a", "b" ] });

    When not explicitly set, the sparse attribute defaults to false for new indexes.

    This causes a change in behavior when creating a unique hash index without specifying the sparse flag: in 2.4, unique hash indexes were implicitly sparse, always excluding null values. There was no option to control this behavior, and sparsity was neither supported for non-unique hash indexes nor skiplists in 2.4. This implicit sparsity of unique hash indexes was considered an inconsistency, and therefore the behavior was cleaned up in 2.5. As of 2.5, indexes will only be created sparse if sparsity is explicitly requested. Existing unique hash indexes from 2.4 or before will automatically be migrated so they are still sparse after the upgrade to 2.5.

    Geo indexes are implicitly sparse, meaning documents without the indexed location attribute or containing invalid location coordinate values will be excluded from the index automatically. This is also a change when compared to pre-2.5 behavior, when documents with missing or invalid coordinate values may have caused errors on insertion when the geo index' unique flag was set and its ignoreNull flag was not.

    This was confusing and has been rectified in 2.5. The method ensureGeoConstraint() now does the same as ensureGeoIndex(). Furthermore, the constraint, unique, ignoreNull and sparse flags are now completely ignored when creating geo indexes.

    The same is true for fulltext indexes. There is no need to specify non-uniqueness or sparsity for geo or fulltext indexes. They will always be non-unique and sparse.

    As sparse indexes may exclude some documents, they cannot be used for every type of query. Sparse hash indexes cannot be used to find documents for which at least one of the indexed attributes has a value of null. For example, the following AQL query cannot use a sparse index, even if one was created on attribute attr:

    FOR doc IN collection
      FILTER doc.attr == null
      RETURN doc
    

    If the lookup value is non-constant, a sparse index may or may not be used, depending on the other types of conditions in the query. If the optimizer can safely determine that the lookup value cannot be null, a sparse index may be used. When uncertain, the optimizer will not make use of a sparse index in a query in order to produce correct results.

    For example, the following queries cannot use a sparse index on attr because the optimizer will not know beforehand whether the comparison values for doc.attr will include null:

    FOR doc IN collection
      FILTER doc.attr == SOME_FUNCTION(...)
      RETURN doc
    
    FOR other IN otherCollection
      FOR doc IN collection
        FILTER doc.attr == other.attr
        RETURN doc
    

    Sparse skiplist indexes can be used for sorting if the optimizer can safely detect that the index range does not include null for any of the index attributes.

  • inspection of AQL data-modification queries will now detect if the data-modification part of the query can run in lockstep with the data retrieval part of the query, or if the data retrieval part must be executed before the data modification can start.

    Executing the two in lockstep allows using much smaller buffers for intermediate results and starts the actual data-modification operations much earlier than if the two phases were executed separately.

  • Allow dynamic attribute names in AQL object literals

    This allows using arbitrary expressions to construct attribute names in object literals specified in AQL queries. To disambiguate expressions and other unquoted attribute names, dynamic attribute names need to be enclosed in brackets ([ and ]). Example:

    FOR i IN 1..100
      RETURN { [ CONCAT('value-of-', i) ] : i }
    
  • make AQL optimizer rule "use-index-for-sort" remove sort also in case a non-sorted index (e.g. a hash index) is used for only equality lookups and all sort attributes are covered by the index.

    Example that does not require an extra sort (needs hash index on value):

    FOR doc IN collection FILTER doc.value == 1 SORT doc.value RETURN doc
    

    Another example that does not require an extra sort (with hash index on value1, value2):

    FOR doc IN collection FILTER doc.value1 == 1 && doc.value2 == 2 SORT doc.value1, doc.value2 RETURN doc
    
  • make AQL optimizer rule "use-index-for-sort" remove sort also in case the sort criteria excludes the left-most index attributes, but the left-most index attributes are used by the index for equality-only lookups.

    Example that can use the index for sorting (needs skiplist index on value1, value2):

    FOR doc IN collection FILTER doc.value1 == 1 SORT doc.value2 RETURN doc
    
  • added selectivity estimates for primary index, edge index, and hash index

    The selectivity estimates are returned by the GET /_api/index REST API method in a sub-attribute selectivityEstimate for each index that supports it. This attribute will be omitted for indexes that do not provide selectivity estimates. If provided, the selectivity estimate will be a numeric value between 0 and 1.

    Selectivity estimates will also be reported in the result of collection.getIndexes() for all indexes that support this. If no selectivity estimate can be determined for an index, the attribute selectivityEstimate will be omitted here, too.

    The web interface also shows selectivity estimates for each index that supports this.

    Currently the following index types can provide selectivity estimates:

    • primary index
    • edge index
    • hash index (unique and non-unique)

    No selectivity estimates will be provided when running in cluster mode.
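
    For example, the estimates can be inspected from the ArangoShell (collection name hypothetical); indexes without support simply lack the attribute:

    db.collection.getIndexes().forEach(function (index) {
      if (index.hasOwnProperty("selectivityEstimate")) {
        require("internal").print(index.type, index.fields, index.selectivityEstimate);
      }
    });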

  • fixed issue #1226: arangod log issues

  • added additional logger if arangod is started in foreground mode on a tty

  • added AQL optimizer rule "move-calculations-down"

  • use exclusive native SRWLocks on Windows instead of native mutexes

  • added AQL functions MD5, SHA1, and RANDOM_TOKEN.
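
    For illustration (input values are arbitrary):

    db._query('RETURN { md5: MD5("foobar"), sha1: SHA1("foobar"), token: RANDOM_TOKEN(16) }').toArray();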

  • reduced number of string allocations when parsing certain AQL queries

    parsing numbers (integers or doubles) does not require a string allocation per number anymore

  • RequestContext#bodyParam now accepts arbitrary joi schemas and rejects invalid (but well-formed) request bodies.

  • enforce that AQL user functions are wrapped inside JavaScript function () declarations

    AQL user functions were always expected to be wrapped inside a JavaScript function, but previously this was not enforced when registering a user function. Enforcing the AQL user functions to be contained inside functions prevents functions from doing some unexpected things that may have led to undefined behavior.
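
    A sketch of registering a user function in the expected form (namespace, name and body are hypothetical):

    var aqlfunctions = require("org/arangodb/aql/functions");

    // the registered code must be a complete JavaScript function declaration
    aqlfunctions.register("myfuncs::double", function (value) {
      return value * 2;
    }, true);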

  • Windows service uninstalling: only remove service if it points to the currently running binary, or --force was specified.

  • Windows (debug only): print stacktraces on crash and run minidump

  • Windows (cygwin): if you run arangosh in a cygwin shell or via ssh we will detect this and use the appropriate output functions.

  • Windows: improve process management

  • fix IPv6 reverse ip lookups - so far we only did IPv4 addresses.

  • improve join documentation, add outer join example

  • run jslint for unit tests too, to prevent "memory leaks" by global js objects with native code.

  • fix error logging for exceptions - previously, the exception message itself was not logged

  • improve error reporting in the http client (Windows & *nix)

  • improve error reports in cluster

  • Standard errors can now contain custom messages.

v2.4.7 (XXXX-XX-XX)

  • fixed issue #1282: Geo WITHIN_RECTANGLE for nested lat/lng

v2.4.6 (2015-03-18)

  • added option --database.ignore-logfile-errors

    This option controls how collection datafiles with a CRC mismatch are treated.

    If set to false, CRC mismatch errors in collection datafiles will lead to a collection not being loaded at all. If a collection needs to be loaded during WAL recovery, the WAL recovery will also abort (if not forced with --wal.ignore-recovery-errors true). Setting this flag to false protects users from unintentionally using a collection with corrupted datafiles, from which only a subset of the original data can be recovered.

    If set to true, CRC mismatch errors in collection datafiles will lead to the datafile being partially loaded. All data up to the mismatch will be loaded. This will enable users to continue with collection datafiles that are corrupted, but will result in only a partial load of the data. The WAL recovery will still abort when encountering a collection with a corrupted datafile, at least if --wal.ignore-recovery-errors is not set to true.

    The default value is true, so for collections with corrupted datafiles there might be partial data loads once the WAL recovery has finished. If the WAL recovery needs to load a collection with a corrupted datafile, it will still stop when using the default values.

  • INCOMPATIBLE CHANGE:

    make the arangod server refuse to start if during startup it finds a non-readable parameter.json file for a database or a collection.

    Stopping the startup process in this case requires manual intervention (fixing the unreadable files), but prevents follow-up errors due to ignored databases or collections from happening.

  • datafiles and parameter.json files written by arangod are now created with read and write privileges for the arangod process user, and with read and write privileges for the arangod process group.

    Previously, these files were created with user read and write permissions only.

  • INCOMPATIBLE CHANGE:

    abort WAL recovery if one of the collection's datafiles cannot be opened

  • INCOMPATIBLE CHANGE:

    never try to raise the privileges after dropping them, as this can lead to a race condition while running the recovery

    If you need to run ArangoDB on a port lower than 1024, you must run ArangoDB as root.

  • fixed inefficiencies in remove methods of general-graph module

  • added option --database.slow-query-threshold for controlling the default AQL slow query threshold value on server start

v2.4.5 (2015-03-16)

  • added elapsed time to HTTP request logging output (--log.requests-file)

  • added AQL current and slow query tracking, killing of AQL queries

    This change enables retrieving the list of currently running AQL queries inside the selected database. AQL queries with an execution time beyond a certain threshold can be moved to a "slow query" facility and retrieved from there. Queries can also be killed by specifying the query id.

    This change adds the following HTTP REST APIs:

    • GET /_api/query/current: for retrieving the list of currently running queries
    • GET /_api/query/slow: for retrieving the list of slow queries
    • DELETE /_api/query/slow: for clearing the list of slow queries
    • GET /_api/query/properties: for retrieving the properties for query tracking
    • PUT /_api/query/properties: for adjusting the properties for query tracking
    • DELETE /_api/query/<id>: for killing an AQL query

    The following JavaScript APIs have been added:

    • require("org/arangodb/aql/queries").current();
    • require("org/arangodb/aql/queries").slow();
    • require("org/arangodb/aql/queries").clearSlow();
    • require("org/arangodb/aql/queries").properties();
    • require("org/arangodb/aql/queries").kill();
  • fixed issue #1265: arangod crashed with SIGSEGV

  • fixed issue #1241: Wildcards in examples

  • fixed comment parsing in Foxx controllers

v2.4.4 (2015-02-24)

  • fixed the generation template for Foxx apps; it no longer creates deprecated functions

  • add custom visitor functionality for GRAPH_NEIGHBORS function, too

  • increased default value of traversal option maxIterations to 100 times its previous default value

v2.4.3 (2015-02-06)

  • fix multi-threading with openssl when running under Windows

  • fix timeout on socket operations when running under Windows

  • Fixed an error in Foxx routing which caused some apps that worked in 2.4.1 to fail with status 500: undefined is not a function errors in 2.4.2. This error occurred due to rare internal rerouting introduced by the malformed-application handler.

v2.4.2 (2015-01-30)

  • added custom visitor functionality for AQL traversals

    This allows more complex result processing in traversals triggered by AQL. A few examples are shown in this article.

  • improved number of results estimated for nodes of type EnumerateListNode and SubqueryNode in AQL explain output

  • added AQL explain helper to explain arbitrary AQL queries

    The helper function prints the query execution plan and the indexes to be used in the query. It can be invoked from the ArangoShell or the web interface as follows:

    require("org/arangodb/aql/explainer").explain(query);
    
  • enable use of indexes for certain AQL conditions with non-equality predicates, in case the condition(s) also refer to indexed attributes

    The following queries will now be able to use indexes:

    FILTER a.indexed == ... && a.indexed != ...
    FILTER a.indexed == ... && a.nonIndexed != ...
    FILTER a.indexed == ... && ! (a.indexed == ...)
    FILTER a.indexed == ... && ! (a.nonIndexed == ...)
    FILTER a.indexed == ... && ! (a.indexed != ...)
    FILTER a.indexed == ... && ! (a.nonIndexed != ...)
    FILTER (a.indexed == ... && a.nonIndexed == ...) || (a.indexed == ... && a.nonIndexed == ...)
    FILTER (a.indexed == ... && a.nonIndexed != ...) || (a.indexed == ... && a.nonIndexed != ...)
    
  • Fixed spuriously occurring "collection not found" errors when running queries on local collections on a cluster DB server

  • Fixed upload of Foxx applications to the server for apps exceeding approx. 1 MB zipped.

  • Malformed Foxx applications will now return a more useful error when any route is requested.

    In Production a Foxx app mounted on /app will display an HTML page on /app/* stating 503 Service temporarily not available. It will not reveal any information about your application. Before, it was a 404 Not Found without any information, not distinguishable from a correct 404 on one of your routes.

    In Development Mode the HTML page also contains information about the error that occurred.

  • Unhandled errors thrown in Foxx routes are now handled by the Foxx framework itself.

    In Production the route will return a status 500 with a body {error: "Error statement"}. In Development the route will return a status 500 with a body {error: "Error statement", stack: "..."}

    Before, it was status 500 with a plain text stack including ArangoDB internal routing information.

  • The Applications tab in the web interface will now request development apps more often, so if you have fixed a syntax error in your app, the fix should be visible after a reload.

v2.4.1 (2015-01-19)

  • improved WAL recovery output

  • fixed certain OR optimizations in AQL optimizer

  • better diagnostics for arangoimp

  • fixed invalid result of HTTP REST API method /_admin/foxx/rescan

  • fixed possible segmentation fault when passing a Buffer object into a V8 function as a parameter

  • updated AQB module to 1.8.0.

v2.4.0 (2015-01-13)

  • updated AQB module to 1.7.0.

  • fixed V8 integration-related crashes

  • make fs.move(src, dest) also fail when both src and dest are existing directories. This ensures the same behavior of the move operation on different platforms.

  • fixed AQL insert operation for multi-shard collections in cluster

  • added optional return value for AQL data-modification queries. This allows returning the documents inserted, removed or updated with the query, e.g.

    FOR doc IN docs REMOVE doc._key IN docs LET removed = OLD RETURN removed
    FOR doc IN docs INSERT { } IN docs LET inserted = NEW RETURN inserted
    FOR doc IN docs UPDATE doc._key WITH { } IN docs LET previous = OLD RETURN previous
    FOR doc IN docs UPDATE doc._key WITH { } IN docs LET updated = NEW RETURN updated
    

    The variables OLD and NEW are automatically available when a REMOVE, INSERT, UPDATE or REPLACE statement is immediately followed by a LET statement. Note that the LET and RETURN statements in data-modification queries are not as flexible as the general versions of LET and RETURN. When returning documents from data-modification operations, only a single variable can be assigned using LET, and the assignment can only be either OLD or NEW, but not an arbitrary expression. The RETURN statement also allows using the just-created variable only, and no arbitrary expressions.

v2.4.0-beta1 (2014-12-26)

  • fixed superstates in FoxxGenerator

  • fixed issue #1065: Aardvark: added creation of documents and edges with _key property

  • fixed issue #1198: Aardvark: current AQL editor query is now cached

  • Upgraded V8 version from 3.16.14 to 3.29.59

    The built-in version of V8 has been upgraded from 3.16.14 to 3.29.59. This activates several ES6 (also dubbed Harmony or ES.next) features in ArangoDB, both in the ArangoShell and the ArangoDB server. They can be used for scripting and in server-side actions such as Foxx routes, traversals etc.

    The following ES6 features are available in ArangoDB 2.4 by default:

    • iterators
    • the of operator
    • symbols
    • predefined collection types (Map, Set, etc.)
    • typed arrays

    Many other ES6 features are disabled by default, but can be made available by starting arangod or arangosh with the appropriate options:

    • arrow functions
    • proxies
    • generators
    • String, Array, and Number enhancements
    • constants
    • enhanced object and numeric literals

    To activate all these ES6 features in arangod or arangosh, start it with the following options:

    arangosh --javascript.v8-options="--harmony --harmony_generators"
    

    More details on the available ES6 features can be found in this blog.

  • Added Foxx generator for building Hypermedia APIs

    A more detailed description is here

  • New Applications tab in web interface:

    The applications tab got a complete redesign. It will now only show applications that are currently running on ArangoDB. For a selected application, a new detailed view has been created. This view provides a better overview of the app:

    • author
    • license
    • version
    • contributors
    • download links
    • API documentation

    To install a new application, a new dialog is now available. It provides the features already available in the console application foxx-manager plus some more:

    • install an application from Github
    • install an application from a zip file
    • install an application from ArangoDB's application store
    • create a new application from scratch: this feature uses a generator to create a Foxx application with pre-defined CRUD methods for a given list of collections. The generated Foxx app can either be downloaded as a zip file or be installed on the server. Starting with a new Foxx app has never been easier.
  • fixed issue #1102: Aardvark: Layout bug in documents overview

    The documents overview was entirely destroyed in some situations on Firefox. We replaced the plugin we used there.

  • fixed issue #1168: Aardvark: pagination buttons jumping

  • fixed issue #1161: Aardvark: Click on Import JSON imports previously uploaded file

  • removed configure options --enable-all-in-one-v8, --enable-all-in-one-icu, and --enable-all-in-one-libev.

  • global internal rename to fix naming incompatibilities with JSON:

    Internal functions with names containing array have been renamed to object, internal functions with names containing list have been renamed to array. The renaming was mainly done in the C++ parts. The documentation has also been adjusted so that the correct JSON type names are used in most places.

    The change also led to the addition of a few function aliases in AQL:

    • TO_LIST now is an alias of the new TO_ARRAY
    • IS_LIST now is an alias of the new IS_ARRAY
    • IS_DOCUMENT now is an alias of the new IS_OBJECT

    The change also renamed the option mergeArrays to mergeObjects for AQL data-modification query options and the HTTP document modification API

  • AQL: added optimizer rule "remove-filter-covered-by-index"

    This rule removes FilterNodes and CalculationNodes from an execution plan if the filter is already covered by a previous IndexRangeNode. Removing the CalculationNode and the FilterNode will speed up query execution because the query requires less computation.

  • AQL: added optimizer rule "remove-sort-rand"

    This rule removes a SORT RAND() expression from a query and moves the random iteration into the appropriate EnumerateCollectionNode. This is more efficient than individually enumerating and then sorting randomly.

  • AQL: range optimizations for IN and OR

    This change enables usage of indexes for several additional cases. Filters containing the IN operator can now make use of indexes, and multiple OR- or AND-combined filter conditions can now also use indexes if the filters are accessing the same indexed attribute.

    Here are a few examples of queries that can now use indexes but couldn't before:

    FOR doc IN collection FILTER doc.indexedAttribute == 1 || doc.indexedAttribute > 99 RETURN doc

    FOR doc IN collection FILTER doc.indexedAttribute IN [ 3, 42 ] || doc.indexedAttribute > 99 RETURN doc

    FOR doc IN collection FILTER (doc.indexedAttribute > 2 && doc.indexedAttribute < 10) || (doc.indexedAttribute > 23 && doc.indexedAttribute < 42) RETURN doc

  • fixed issue #500: AQL parentheses issue

    This change allows passing subqueries as AQL function parameters without using duplicate brackets (e.g. FUNC(query) instead of FUNC((query)))

  • added optional COUNT clause to AQL COLLECT

    This allows more efficient group count calculation queries, e.g.

    FOR doc IN collection
      COLLECT age = doc.age WITH COUNT INTO length
      RETURN { age: age, count: length }
    

    A count-only query is also possible:

    FOR doc IN collection
      COLLECT WITH COUNT INTO length
      RETURN length
    
  • fixed missing makeDirectory when fetching a Foxx application from a zip file

  • fixed issue #1134: Change the default endpoint to localhost

    This change will modify the IP address ArangoDB listens on to 127.0.0.1 by default. This will make new ArangoDB installations inaccessible from clients other than localhost unless changed. This is a security feature.

    To make ArangoDB accessible from any client, change the server's configuration (--server.endpoint) to either tcp://0.0.0.0:8529 or the server's publicly visible IP address.

  • deprecated Repository#modelPrototype. Use Repository#model instead.

  • IMPORTANT CHANGE: by default, system collections are included in replication and all replication API return values. This will lead to user accounts and credentials data being replicated from master to slave servers. This may overwrite slave-specific database users.

    If this is undesired, the _users collection can be excluded from replication easily by setting the includeSystem attribute to false in the following commands:

    • replication.sync({ includeSystem: false });
    • replication.applier.properties({ includeSystem: false });

    This will exclude all system collections (including _aqlfunctions, _graphs etc.) from the initial synchronization and the continuous replication.

    If this is also undesired, it is also possible to specify a list of collections to exclude from the initial synchronization and the continuous replication using the restrictCollections attribute, e.g.:

    replication.applier.properties({
      includeSystem: true,
      restrictType: "exclude",
      restrictCollections: [ "_users", "_graphs", "foo" ]
    });
    

    The HTTP API methods for fetching the replication inventory and for dumping collections also support the includeSystem control flag via a URL parameter.

  • removed DEPRECATED replication methods:

    • replication.logger.start()
    • replication.logger.stop()
    • replication.logger.properties()
    • HTTP PUT /_api/replication/logger-start
    • HTTP PUT /_api/replication/logger-stop
    • HTTP GET /_api/replication/logger-config
    • HTTP PUT /_api/replication/logger-config
  • fixed issue #1174, which was due to locking problems in distributed AQL execution

  • improved cluster locking for AQL avoiding deadlocks

  • use DistributeNode for modifying queries with REPLACE and UPDATE, if possible

v2.3.6 (2015-XX-XX)

  • fixed AQL subquery optimization that produced a wrong result when multiple subqueries directly followed each other and a directly following LET statement referred to any but the first subquery.

v2.3.5 (2015-01-16)

  • fixed intermittent 404 errors in Foxx apps after mounting or unmounting apps

  • fixed issue #1200: Expansion operator results in "Cannot call method 'forEach' of null"

  • fixed issue #1199: Cannot unlink root node of plan

v2.3.4 (2014-12-23)

  • fixed cerberus path for MyArangoDB

v2.3.3 (2014-12-17)

  • fixed error handling in instantiation of distributed AQL queries, this also fixes a bug in cluster startup with many servers

  • issue #1185: parse non-fractional JSON numbers with exponent (e.g. 4e-261)

  • issue #1159: allow --server.request-timeout and --server.connect-timeout of 0

v2.3.2 (2014-12-09)

  • fixed issue #1177: Fix bug in the user app's storage

  • fixed issue #1173: AQL Editor "Save current query" resets user password

  • fixed missing makeDirectory when fetching a Foxx application from a zip file

  • put in warning about default changed: fixed issue #1134: Change the default endpoint to localhost

  • fixed issue #1163: invalid fullCount value returned from AQL

  • fixed range operator precedence

  • limit default maximum number of plans created by AQL optimizer to 256 (from 1024)

  • make AQL optimizer not generate an extra plan if an index can be used, but modify existing plans in place

  • fixed AQL cursor ttl (time-to-live) issue

    Any user-specified cursor ttl value was not honored since 2.3.0.

  • fixed segfault in AQL query hash index setup with unknown shapes

  • fixed memleaks

  • added AQL optimizer rule for removing INTO from a COLLECT statement if not needed

  • fixed issue #1131

    This change provides the KEEP clause for COLLECT ... INTO. The KEEP clause allows controlling which variables will be kept in the variable created by INTO.

  • fixed issue #1147, must protect dispatcher ID for etcd

v2.3.1 (2014-11-28)

  • recreate password if missing during upgrade

  • fixed issue #1126

  • fixed non-working subquery index optimizations

  • do not restrict summary of Foxx applications to 60 characters

  • fixed display of "required" path parameters in Foxx application documentation

  • added more optimizations of constants values in AQL FILTER conditions

  • fixed invalid or-to-in optimization for FILTERs containing comparisons with boolean values

  • fixed replication of _graphs collection

  • added AQL list functions PUSH, POP, UNSHIFT, SHIFT, REMOVE_VALUES, REMOVE_VALUE, REMOVE_NTH and APPEND

  • added AQL functions CALL and APPLY to dynamically call other functions
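
    For illustration, invoking other AQL functions dynamically by name:

    // both expressions evaluate to "foobar"
    db._query('RETURN [ CALL("CONCAT", "foo", "bar"), APPLY("CONCAT", ["foo", "bar"]) ]').toArray();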

  • fixed AQL optimizer cost estimation for LIMIT node

  • prevent Foxx queues from permanently writing to the journal even when server is idle

  • fixed AQL COLLECT statement with INTO clause, which copied more variables than v2.2 and thus led to too much memory consumption. This deals with #1107.

  • fixed AQL COLLECT statement: this concerned every COLLECT statement; only the first group had access to the values of the variables set before the COLLECT statement. This deals with #1127.

  • fixed some AQL internals, where sometimes too many items were fetched from upstream in the presence of a LIMIT clause. This should generally improve performance.

v2.3.0 (2014-11-18)

  • fixed syslog flags. --log.syslog is deprecated and setting it has no effect, --log.facility now works as described. Application name has been changed from triagens to arangod. It can be changed using --log.application. The syslog will only contain the actual log message. The datetime prefix is omitted.

  • fixed deflate in SimpleHttpClient

  • fixed issue #1104: edgeExamples broken or changed

  • fixed issue #1103: Error while importing user queries

  • fixed issue #1100: AQL: HAS() fails on doc[attribute_name]

  • fixed issue #1098: runtime error when creating graph vertex

  • hide system applications in Applications tab by default

    Display of system applications can be toggled by using the system applications toggle in the UI.

  • added HTTP REST API for managing tasks (/_api/tasks)

  • allow passing character lists as optional parameter to AQL functions TRIM, LTRIM and RTRIM

    These functions now support trimming using custom character lists. If no character lists are specified, all whitespace characters will be removed as previously:

    TRIM("  foobar\t \r\n ")         // "foobar"
    TRIM(";foo;bar;baz, ", "; ")     // "foo;bar;baz"
    
  • added AQL string functions LTRIM, RTRIM, FIND_FIRST, FIND_LAST, SPLIT, SUBSTITUTE
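
    A few illustrative invocations from the ArangoShell:

    db._query('RETURN SPLIT("a,b,c", ",")').toArray();            // [ [ "a", "b", "c" ] ]
    db._query('RETURN SUBSTITUTE("hello", "l", "L")').toArray();  // [ "heLLo" ]
    db._query('RETURN FIND_FIRST("foobarbaz", "bar")').toArray(); // [ 3 ]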

  • added AQL functions ZIP, VALUES and PERCENTILE

  • made AQL functions CONCAT and CONCAT_SEPARATOR work with list arguments

  • dynamically create extra dispatcher threads if required

  • fixed issue #1097: schemas in the API docs no longer show required properties as optional

v2.3.0-beta2 (2014-11-08)

  • front-end: new icons for uploading and downloading JSON documents into a collection

  • front-end: fixed documents pagination css display error

  • front-end: fixed flickering of the progress view

  • front-end: fixed missing event for documents filter function

  • front-end: jsoneditor: added CMD+Return (Mac) CTRL+Return (Linux/Win) shortkey for saving a document

  • front-end: added information tooltip for uploading json documents.

  • front-end: added database management view to the collapsed navigation menu

  • front-end: added collection truncation feature

  • fixed issue #1086: arangoimp: Odd errors if arguments are not given properly

  • performance improvements for AQL queries that use JavaScript-based expressions internally

  • added AQL geo functions WITHIN_RECTANGLE and IS_IN_POLYGON

  • fixed non-working query results download in AQL editor of web interface

  • removed debug print message in AQL editor query export routine

  • fixed issue #1075: Aardvark: user name required even if auth is off #1075

    The fix for this prefills the username input field with the current user's account name if any and root (the default username) otherwise. Additionally, the tooltip text has been slightly adjusted.

  • fixed issue #1069: Add 'raw' link to swagger ui so that the raw swagger json can easily be retrieved

    This adds a link to the Swagger API docs to an application's detail view in the Applications tab of the web interface. The link produces the Swagger JSON directly. If authentication is turned on, the link requires authentication, too.

  • documentation updates

v2.3.0-beta1 (2014-11-01)

  • added dedicated NOT IN operator for AQL

    Previously, a NOT IN was only achievable by writing a negated IN condition:

    FOR i IN ... FILTER ! (i IN [ 23, 42 ]) ...
    

    This can now alternatively be expressed more intuitively as follows:

    FOR i IN ... FILTER i NOT IN [ 23, 42 ] ...
    
  • added alternative logical operator syntax for AQL

    Previously, the logical operators in AQL could only be written as:

    • &&: logical and
    • ||: logical or
    • !: negation

    ArangoDB 2.3 introduces the alternative variants for these operators:

    • AND: logical and
    • OR: logical or
    • NOT: negation

    The new syntax is just an alternative to the old syntax, allowing easier migration from SQL. The old syntax is still fully supported and will remain so.

  • improved output of ArangoStatement.parse() and POST /_api/query

    If an AQL query can be parsed without problems, the return value of ArangoStatement.parse() now contains an attribute ast with the abstract syntax tree of the query (before optimizations). Though this is an internal representation of the query and is subject to change, it can be used to inspect how ArangoDB interprets a given query.
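
    A sketch from the ArangoShell (query text is arbitrary):

    var stmt = db._createStatement({ query: "FOR doc IN collection RETURN doc.value" });
    var result = stmt.parse();
    // result.ast contains the abstract syntax tree of the query
    require("internal").print(result.ast);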

  • improved ArangoStatement.explain() and POST /_api/explain

    The commands for explaining AQL queries have been improved.

  • added command-line option --javascript.v8-contexts to control the number of V8 contexts created in arangod.

    Previously, the number of V8 contexts was equal to the number of server threads (as specified by option --server.threads).

    However, it may be sensible to create different numbers of threads and V8 contexts. If the option is not specified, the number of V8 contexts created will be equal to the number of server threads. Thus no change in configuration is required to keep the old behavior.

    If you are using the default config files or merge them with your local config files, please review if the default number of server threads is okay in your environment. Additionally you should verify that the number of V8 contexts created (as specified in option --javascript.v8-contexts) is okay.

  • the number of server.threads specified is now the minimum number of threads started. There are situations in which threads are waiting for results from distributed database servers. In this case the number of threads is dynamically increased.

  • removed index type "bitarray"

    Bitarray indexes were only half-way documented and integrated in previous versions of ArangoDB so their benefit was limited. The support for bitarray indexes has thus been removed in ArangoDB 2.3. It is not possible to create indexes of type "bitarray" with ArangoDB 2.3.

    When a collection is opened that contains a bitarray index definition created with a previous version of ArangoDB, ArangoDB will ignore it and log the following warning:

    index type 'bitarray' is not supported in this version of ArangoDB and is ignored
    

    Future versions of ArangoDB may automatically remove such index definitions so the warnings will eventually disappear.

  • removed internal "_admin/modules/flush" in order to fix requireApp

  • added basic support for handling binary data in Foxx

    Requests with binary payload can be processed in Foxx applications by using the new method res.rawBodyBuffer(). This will return the unparsed request body as a Buffer object.

    There is now also the method req.requestParts() available in Foxx to retrieve the individual components of a multipart HTTP request.

    Buffer objects can now be used when setting the response body of any Foxx action. Additionally, res.send() has been added as a convenience method for returning strings, JSON objects or buffers from a Foxx action:

    res.send("<p>some HTML</p>");
    res.send({ success: true });
    res.send(new Buffer("some binary data"));
    

    The convenience method res.sendFile() can now be used to easily return the contents of a file from a Foxx action:

    res.sendFile(applicationContext.foxxFilename("image.png"));
    

    fs.write now accepts not only strings but also Buffer objects as second parameter:

    fs.write(filename, "some data");
    fs.write(filename, new Buffer("some binary data"));
    

    fs.readBuffer can be used to return the contents of a file in a Buffer object.

  • improved performance of insertion into non-unique hash indexes significantly in case many duplicate keys are used in the index

  • issue #1042: set time zone in log output

    the command-line option --log.use-local-time was added to print dates and times in the server-local timezone instead of UTC

  • command-line options that require a boolean value now validate the value given on the command-line

    This prevents issues if no value is specified for an option that requires a boolean value. For example, the following command-line would have caused trouble in 2.2, because --server.endpoint would have been used as the value for the --server.disable-authentication option (which requires a boolean value):

    arangod --server.disable-authentication --server.endpoint tcp://127.0.0.1:8529 data
    

    In 2.3, running this command will fail with an error and requires to be modified to:

    arangod --server.disable-authentication true --server.endpoint tcp://127.0.0.1:8529 data
    
  • improved performance of CSV import in arangoimp

  • fixed issue #1027: Stack traces are off-by-one

  • fixed issue #1026: Modules loaded in different files within the same app should refer to the same module

  • fixed issue #1025: Traversal not as expected in undirected graph

  • added a _relation function in the general-graph module.

    This deprecated _directedRelation and _undirectedRelation. ArangoDB does not offer any constraints for undirected edges, which caused some confusion among users about how undirected relations have to be handled. _relation now only supports directed relations, and the user can actively simulate undirected relations.
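
    A sketch of defining a directed relation with the new function (graph and collection names are hypothetical):

    var graphModule = require("org/arangodb/general-graph");

    // "knows" edges lead from documents in "persons" to documents in "persons"
    var rel = graphModule._relation("knows", ["persons"], ["persons"]);
    var graph = graphModule._create("social", [rel]);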

  • changed return value of Foxx.applicationContext#collectionName:

    Previously, the function could return invalid collection names because invalid characters were not replaced in the application name prefix, only in the collection name passed.

    Now, the function replaces invalid characters also in the application name prefix, which might lead to slightly different results for application names that contained any characters outside the ranges [a-z], [A-Z] and [0-9].

  • prevent XSS in AQL editor and logs view

  • integrated tutorial into ArangoShell and web interface

  • added option --backslash-escape for arangoimp when running CSV file imports

  • front-end: added download feature for (filtered) documents

  • front-end: added download feature for the results of a user query

  • front-end: added function to move documents to another collection

  • front-end: added sort-by attribute to the documents filter

  • front-end: added sorting feature to database, graph management and user management view.

  • issue #989: front-end: Databases view not refreshing after deleting a database

  • issue #991: front-end: Database search broken

  • front-end: added infobox which shows more information about a document (_id, _rev, _key) or an edge (_id, _rev, _key, _from, _to). The from and to attributes are clickable and redirect to their document location.

  • front-end: added edit-mode for deleting multiple documents at the same time.

  • front-end: added delete button to the detailed document/edge view.

  • front-end: added visual feedback for saving documents/edges inside the editor (error/success).

  • front-end: added auto-focusing for the first input field in a modal.

  • front-end: added validation for user input in a modal.

  • front-end: user defined queries are now stored inside the database and are bound to the current user, instead of using the local storage functionality of the browsers. The outcome of this is that user defined queries are now independently usable from any device. Also queries can now be edited through the standard document editor of the front-end through the _users collection.

  • front-end: added import and export functionality for user defined queries.

  • front-end: added new keywords and functions to the aql-editor theme

  • front-end: applied tile-style to the graph view

  • front-end: now using the new graph api including multi-collection support

  • front-end: foxx apps are now deletable

  • front-end: foxx apps are now installable and updateable through github, if github is their origin.

  • front-end: added foxx app version control. Multiple versions of a single foxx app are now installable and easy to manage and are also arranged in groups.

  • front-end: the user-set filter of a collection is now stored until the user navigates to another collection.

  • front-end: fetching and filtering of documents, statistics, and query operations are now handled with asynchronous ajax calls.

  • front-end: added progress indicator if the front-end is waiting for a server operation.

  • front-end: fixed wrong count of documents in the documents view of a collection.

  • front-end: fixed unexpected styling of the manage db view and navigation.

  • front-end: fixed wrong handling of select fields in a modal view.

  • front-end: fixed wrong positioning of some tooltips.

  • automatically call toJSON function of JavaScript objects (if present) when serializing them into database documents. This change allows storing JavaScript date objects in the database in a sensible manner.
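
    For example (collection name hypothetical):

    // Date provides a toJSON() method, so an ISO 8601 string is stored
    db.collection.save({ name: "test", createdAt: new Date() });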

v2.2.7 (2014-11-19)

  • fixed issue #998: Incorrect application URL for non-system Foxx apps

  • fixed issue #1079: AQL editor: keyword WITH in UPDATE query is not highlighted

  • fix memory leak in cluster nodes

  • fixed registration of AQL user-defined functions in Web UI (JS shell)

  • fixed error display in Web UI for certain errors (now error message is printed instead of 'undefined')

  • fixed issue #1059: bug in js module console

  • fixed issue #1056: "fs": zip functions fail with passwords

  • fixed issue #1063: Docs: measuring unit of --wal.logfile-size?

  • fixed issue #1062: Docs: typo in 14.2 Example data

v2.2.6 (2014-10-20)

  • fixed issue #972: Compilation Issue

  • fixed issue #743: temporary directories are now unique, and their names indicate the tool that created them; if empty, they are removed at exit

  • Highly improved performance of all AQL GRAPH_* functions.

  • Orphan collections in general graphs can now be found via GRAPH_VERTICES if either "any" or no direction is defined

  • Fixed documentation for AQL function GRAPH_NEIGHBORS. The option "vertexCollectionRestriction" is meant to filter the target vertices only, and should not filter the path.

  • Fixed a bug in GRAPH_NEIGHBORS which caused it to return only empty results under certain conditions

v2.2.5 (2014-10-09)

  • fixed issue #961: allow non-JSON values in undocument request bodies

  • fixed issue 1028: libicu is now statically linked

  • fixed cached lookups of collections on the server, which may have caused spurious problems after collection rename operations

v2.2.4 (2014-10-01)

  • fixed accessing _from and _to attributes in collection.byExample and collection.firstExample

    These internal attributes were not handled properly in the mentioned functions, so searching for them did not always produce documents

  • fixed issue #1030: arangoimp 2.2.3 crashing, not logging on large Windows CSV file

  • fixed issue #1025: Traversal not as expected in undirected graph

  • fixed issue #1020

    This requires re-introducing the startup option --database.force-sync-properties.

    This option can again be used to force fsyncs of collection, index and database properties stored as JSON strings on disk in files named parameter.json. Syncing these files after a write may be necessary if the underlying storage does not sync file contents by itself in a "sensible" amount of time after a file has been written and closed.

    The default value is true so collection, index and database properties will always be synced to disk immediately. This affects creating, renaming and dropping collections as well as creating and dropping databases and indexes. Each of these operations will perform an additional fsync on the parameter.json file if the option is set to true.

    It might be sensible to set this option to false for workloads that create and drop a lot of collections (e.g. test runs).

    Document operations such as creating, updating and dropping documents are not affected by this option.

  • fixed issue #1016: AQL editor bug

  • fixed issue #1014: WITHIN function returns wrong distance

  • fixed AQL shortest path calculation in function GRAPH_SHORTEST_PATH to return complete vertex objects instead of just vertex ids

  • allow changing of attributes of documents stored in server-side JavaScript variables

    Previously, the following did not work:

    var doc = db.collection.document(key);
    doc._key = "abc"; // overwriting internal attributes not supported
    doc.value = 123;  // overwriting existing attributes not supported
    

    Now, modifying documents stored in server-side variables (e.g. doc in the above case) is supported. Modifying the variables will not update the documents in the database, but will modify the JavaScript object (which can be written back to the database using db.collection.update or db.collection.replace)
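
    A minimal sketch of the new behavior (collection and key names are placeholders):

    // fetch a document into a server-side JavaScript variable
    var doc = db.collection.document(key);
    doc.value = 123;                        // modifying the JavaScript object now works
    db.collection.update(doc._key, doc);    // explicitly write the modification back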

  • fixed issue #997: arangoimp apparently doesn't support files >2gig on Windows

    large files (requiring the use of _stat64 instead of stat) are now supported on Windows

v2.2.3 (2014-09-02)

  • added around for Foxx controller

  • added type option for HTTP API GET /_api/document?collection=...

    This allows controlling the type of results to be returned. By default, paths to documents will be returned, e.g.

    [
      "/_api/document/test/mykey1",
      "/_api/document/test/mykey2",
      ...
    ]
    

    To return a list of document ids instead of paths, the type URL parameter can be set to id:

    [
      "test/mykey1",
      "test/mykey2",
      ...
    ]
    

    To return a list of document keys only, the type URL parameter can be set to key:

    [
      "mykey1",
      "mykey2",
      ...
    ]
    
  • properly capitalize HTTP response header field names in case the x-arango-async HTTP header was used in a request.

  • fixed several documentation issues

  • speedup for several general-graph functions, AQL functions starting with GRAPH_ and traversals

v2.2.2 (2014-08-08)

  • allow storing non-reserved attribute names starting with an underscore

    Previous versions of ArangoDB parsed away all attribute names that started with an underscore (e.g. `_test`, `_foo`, `_bar`) on all levels of a document (root level and sub-attribute levels). While this behavior was documented, it was unintuitive and prevented storing documents inside other documents, e.g.:

    {
      "_key" : "foo",
      "_type" : "mydoc",
      "references" : [
        {
          "_key" : "something",
          "_rev" : "...",
          "value" : 1
        },
        {
          "_key" : "something else",
          "_rev" : "...",
          "value" : 2
        }
      ]
    }
    

    In the above example, previous versions of ArangoDB removed all attributes and sub-attributes that started with underscores, meaning the embedded documents would lose some of their attributes. 2.2.2 should preserve such attributes, and will also allow storing user-defined attribute names on the top-level even if they start with underscores (such as _type in the above example).

  • fix conversion of JavaScript String, Number and Boolean objects to JSON.

    Objects created in JavaScript using new Number(...), new String(...), or new Boolean(...) were not converted to JSON correctly.

  • fixed a race condition on task registration (i.e. require("org/arangodb/tasks").register())

    this race condition led to undefined behavior when a just-created task with no offset and no period was instantly executed and deleted by the task scheduler, before the register function returned to the caller.

  • changed run-tests.sh to execute all suitable tests.

  • switch to new version of gyp

  • fixed upgrade button

v2.2.1 (2014-07-24)

  • fixed hanging write-ahead log recovery for certain cases that involved dropping databases

  • fixed issue with --check-version: when creating a new database the check failed

  • added startup option --wal.suppress-shape-information

    Setting this option to true will reduce memory and disk space usage and require less CPU time when modifying documents or edges. It should therefore be turned on for standalone ArangoDB servers. However, for servers that are used as replication masters, setting this option to true will effectively disable the usage of the write-ahead log for replication, so it should be set to false for any replication master servers.

    The default value for this option is false.

  • added optional ttl attribute to specify result cursor expiration for HTTP API method POST /_api/cursor

    The ttl attribute can be used to prevent cursor results from timing out too early.

  • issue #947: Foxx applicationContext missing some properties

  • fixed char signedness in V8 builds (reported by Christian Neubauer):

    The problem was that in Google's V8, signed and unsigned chars are not always declared cleanly, so V8 needs to be compiled with forced signed chars, which is done with the flag -fsigned-char. It is enough to follow the instructions for compiling ArangoDB on Raspberry Pi, add "CFLAGS='-fsigned-char'" to the make command of V8, and remove the armv7=0 setting.

  • Fixed a bug in the replication client: in the case of single-document transactions, the collection was not write-locked.

v2.2.0 (2014-07-10)

  • The replication methods logger.start, logger.stop and logger.properties are no-ops in ArangoDB 2.2 as there is no separate replication logger anymore. Data changes are logged into the write-ahead log in ArangoDB 2.2, and not separately by the replication logger. The replication logger object is still there in ArangoDB 2.2 to ensure backwards-compatibility, however, logging cannot be started, stopped or configured anymore. Using any of these methods will do nothing.

    This also affects the following HTTP API methods:

    • PUT /_api/replication/logger-start
    • PUT /_api/replication/logger-stop
    • GET /_api/replication/logger-config
    • PUT /_api/replication/logger-config

    Using any of these methods is discouraged from now on as they will be removed in future versions of ArangoDB.

  • INCOMPATIBLE CHANGE: replication of transactions has changed. Previously, transactions were logged on a master in one big block and shipped to a slave in one block, too. Now transactions will be logged and replicated as separate entries, allowing transactions to be bigger and also ensure replication progress.

    This change also affects the behavior of the stop method of the replication applier. If the replication applier is now stopped manually using the stop method and later restarted using the start method, any transactions that were unfinished at the point of stopping will be aborted on a slave, even if they later commit on the master.

    In ArangoDB 2.2, stopping the replication applier manually should be avoided unless the goal is to stop replication permanently or to do a full resync with the master anyway. If the replication applier still must be stopped, it should be made sure that the slave has fetched and applied all pending operations from a master, and that no extra transactions are started on the master before the stop command on the slave is executed.

    Replication of transactions in ArangoDB 2.2 might also lock the involved collections on the slave while a transaction is either committed or aborted on the master and the change has been replicated to the slave. This change in behavior may be important for slave servers that are used for read-scaling. In order to avoid long lasting collection locks on the slave, transactions should be kept small.

    The _replication system collection is not used anymore in ArangoDB 2.2 and its usage is discouraged.

  • INCOMPATIBLE CHANGE: the figures reported by the collection.figures method now only reflect documents and data contained in the journals and datafiles of collections. Documents or deletions contained only in the write-ahead log will not influence collection figures until the write-ahead log garbage collection kicks in. The figures for a collection might therefore underreport the total resource usage of a collection.

    Additionally, the attributes lastTick and uncollectedLogfileEntries have been added to the result of the figures operation and the HTTP API method PUT /_api/collection/figures

  • added insert method as an alias for save. Documents can now be inserted into a collection using either method:

    db.test.save({ foo: "bar" });
    db.test.insert({ foo: "bar" });
    
  • added support for data-modification AQL queries

  • added AQL keywords INSERT, UPDATE, REPLACE and REMOVE (and WITH) to support data-modification AQL queries.

    Unquoted usage of these keywords for attribute names in AQL queries will likely fail in ArangoDB 2.2. If any such attribute name needs to be used in a query, it should be enclosed in backticks to indicate the usage of a literal attribute name.

    For example, the following query will fail in ArangoDB 2.2 with a parse error:

    FOR i IN foo RETURN i.remove
    

    and needs to be rewritten like this:

    FOR i IN foo RETURN i.`remove`
    
  • disallow storing of JavaScript objects that contain JavaScript native objects of type Date, Function, RegExp or External, e.g.

    db.test.save({ foo: /bar/ });
    db.test.save({ foo: new Date() });
    

    will now print

    Error: <data> cannot be converted into JSON shape: could not shape document
    

    Previously, objects of these types were silently converted into an empty object (i.e. { }).

    To store such objects in a collection, explicitly convert them into strings like this:

    db.test.save({ foo: String(/bar/) });
    db.test.save({ foo: String(new Date()) });
    
  • honor startup option --server.disable-statistics when deciding whether or not to start periodic statistics collection jobs

    Previously, the statistics collection jobs were started even if the server was started with the --server.disable-statistics flag being set to true

  • removed startup option --random.no-seed

    This option had no effect in previous versions of ArangoDB and was thus removed.

  • removed startup option --database.remove-on-drop

    This option was used for debugging only.

  • removed startup option --database.force-sync-properties

    This option is now superfluous as collection properties are now stored in the write-ahead log.

  • introduced write-ahead log

    All write operations in an ArangoDB server instance are automatically logged to the server's write-ahead log. The write-ahead log is a set of append-only logfiles, and it is used in case of a crash recovery and for replication. Data from the write-ahead log will eventually be moved into the journals or datafiles of collections, allowing the server to remove older write-ahead log logfiles. Figures of collections will be updated when data are moved from the write-ahead log into the journals or datafiles of collections.

    Cross-collection transactions in ArangoDB should benefit considerably by this change, as less writes than in previous versions are required to ensure the data of multiple collections are atomically and durably committed. All data-modifying operations inside transactions (insert, update, remove) will write their operations into the write-ahead log directly, making transactions with multiple operations also require less physical memory than in previous versions of ArangoDB, that required all transaction data to fit into RAM.

    The _trx system collection is not used anymore in ArangoDB 2.2 and its usage is discouraged.

    The data in the write-ahead log can also be used in the replication context. The _replication collection that was used in previous versions of ArangoDB to store all changes on the server is not used anymore in ArangoDB 2.2. Instead, slaves can read from a master's write-ahead log to get informed about most recent changes. This removes the need to store data-modifying operations in both the actual place and the _replication collection.

  • removed startup option --server.disable-replication-logger

    This option is superfluous in ArangoDB 2.2. There is no dedicated replication logger in ArangoDB 2.2. There is now always the write-ahead log, and it is also used as the server's replication log. Specifying the startup option --server.disable-replication-logger will do nothing in ArangoDB 2.2, but the option should not be used anymore as it might be removed in a future version.

  • changed behavior of replication logger

    There is no dedicated replication logger in ArangoDB 2.2 as there is the write-ahead log now. The existing APIs for starting and stopping the replication logger still exist in ArangoDB 2.2 for downwards-compatibility, but calling the start or stop operations are no-ops in ArangoDB 2.2. When querying the replication logger status via the API, the server will always report that the replication logger is running. Configuring the replication logger is a no-op in ArangoDB 2.2, too. Changing the replication logger configuration has no effect. Instead, the write-ahead log configuration can be changed.

  • removed MRuby integration for arangod

    ArangoDB had an experimental MRuby integration in some of the published builds. This wasn't continuously developed, and so it has been removed in ArangoDB 2.2.

    This change has led to the following startup options being superfluous:

    • --ruby.gc-interval
    • --ruby.action-directory
    • --ruby.modules-path
    • --ruby.startup-directory

    Specifying these startup options will do nothing in ArangoDB 2.2, but the options should be avoided from now on as they might be removed in future versions.

  • reclaim index memory when last document in collection is deleted

    Previously, deleting documents from a collection did not lead to index sizes being reduced. Instead, the already allocated index memory was re-used when a collection was refilled.

    Now, index memory for primary indexes and hash indexes is reclaimed instantly when the last document from a collection is removed.

  • inlined and optimized functions in hash indexes

  • added AQL TRANSLATE function

    This function can be used to perform lookups from static lists, e.g.

    LET countryNames = { US: "United States", UK: "United Kingdom", FR: "France" }
    RETURN TRANSLATE("FR", countryNames)
    
  • fixed datafile debugger

  • fixed check-version for empty directory

  • moved try/catch block to the top of routing chain

  • added mountedApp function for foxx-manager

  • fixed issue #883: arango 2.1 - when starting multi-machine cluster, UI web does not change to cluster overview

  • fixed dfdb: should not start any other V8 threads

  • cleanup of version-check, added module org/arangodb/database-version, added --check-version option

  • fixed issue #881: [2.1.0] Bombarded (every 10 sec or so) with "WARNING format string is corrupt" when in non-system DB Dashboard

  • specialized primary index implementation to allow faster hash table rebuilding and reduce lookups in datafiles for the actual value of _key.

  • issue #862: added --overwrite option to arangoimp

  • reduced the number of property lookups for documents during AQL queries that access documents

  • prevent buffering of long print results in arangosh's and arangod's print command

    this change will emit buffered intermediate print results and discard the output buffer to quickly deliver print results to the user, and to prevent constructing very large buffers for large results

  • removed sorting of attribute names for use in a collection's shaper

    sorting attribute names was done on document insert to keep attributes of a collection in sorted order for faster comparisons. The sort order of attributes was only used in one particular and unlikely case, so it was removed. Collections with many different attribute names should benefit from this change by faster inserts and slightly less memory usage.

  • fixed a bug in arangodump which got the collection name in _from and _to attributes of edges wrong (all were "_unknown")

  • fixed a bug in arangorestore which did not recognize wrong _from and _to attributes of edges

  • improved error detection and reporting in arangorestore

v2.1.1 (2014-06-06)

  • fixed dfdb: should not start any other V8 threads

  • signature for collection functions was modified

    The basic change was the substitution of the functions' input parameters by a generic options object which can contain multiple option parameters. The following functions were modified: remove, removeByExample, replace, replaceByExample, update, updateByExample

    The old signature is still supported but will be removed in future versions

v2.1.0 (2014-05-29)

  • implemented upgrade procedure for clusters

  • fixed communication issue with agency which prevented reconnect after an agent failure

  • fixed cluster dashboard in the case that one but not all servers in the cluster are down

  • fixed a bug with coordinators creating local database objects in the wrong order (_system needs to be done first)

  • improved cluster dashboard

v2.1.0-rc2 (2014-05-25)

  • fixed issue #864: Inconsistent behavior of AQL REVERSE(list) function

v2.1.0-rc1 (XXXX-XX-XX)

  • added server-side periodic task management functions:

    • require("org/arangodb/tasks").register(): registers a periodic task
    • require("org/arangodb/tasks").unregister(): unregisters and removes a periodic task
    • require("org/arangodb/tasks").get(): retrieves a specific tasks or all existing tasks

    the previous undocumented function internal.definePeriodic is now deprecated and will be removed in a future release.
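
    For example, registering and removing a simple periodic task could look like this (the task id, period and command below are made up for illustration):

    var tasks = require("org/arangodb/tasks");
    tasks.register({
      id: "cleanup",               // hypothetical task id
      name: "example cleanup",     // descriptive name
      period: 60,                  // run the command every 60 seconds
      command: function () {
        require("console").log("periodic task executed");
      }
    });
    tasks.unregister("cleanup");   // remove the task again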

  • decrease the size of some seldom used system collections on creation.

    This will make these collections use less disk space and mapped memory.

  • added AQL date functions

  • added AQL FLATTEN() list function
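
    A brief sketch (run from arangosh; by default FLATTEN flattens one level, an optional depth argument can be passed):

    db._query('RETURN FLATTEN([ 1, [ 2, [ 3, 4 ] ] ])').toArray();
    // [ [ 1, 2, [ 3, 4 ] ] ]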

  • added index memory statistics to db.<collection>.figures() function

    The figures function will now return a sub-document indexes, which lists the number of indexes in the count sub-attribute, and the total memory usage of the indexes in bytes in the size sub-attribute.
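
    For example (the collection name test is a placeholder, the numbers are illustrative only):

    db.test.figures().indexes;
    // { "count" : 1, "size" : 32128 }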

  • added AQL CURRENT_DATABASE() function

    This function returns the current database's name.
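
    For example, from arangosh (the result depends on the database the query runs in):

    db._query('RETURN CURRENT_DATABASE()').toArray();
    // [ "_system" ]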

  • added AQL CURRENT_USER() function

    This function returns the current user from an AQL query. The current user is the username that was specified in the Authorization HTTP header of the request. If authentication is turned off or the query was executed outside a request context, the function will return null.
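
    For example, from arangosh:

    db._query('RETURN CURRENT_USER()').toArray();
    // e.g. [ "root" ], or [ null ] if authentication is turned off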

  • fixed issue #796: Searching with newline chars broken?

    fixed slightly different handling of backslash escape characters in a few AQL functions. Now handling of escape sequences should be consistent, and searching for newline characters should work the same everywhere

  • added OpenSSL version check for configure

    It will report all OpenSSL versions < 1.0.1g as being too old. configure will only complain about an outdated OpenSSL version but not stop.

  • require C++ compiler support (requires g++ 4.8, clang++ 3.4 or Visual Studio 13)

  • less string copying when returning JSONified documents from ArangoDB, e.g. via HTTP GET /_api/document/<collection>/<document>

  • issue #798: Lower case http headers from arango

    This change allows returning capitalized HTTP headers, e.g. Content-Length instead of content-length. The HTTP spec says that headers are case-insensitive, but in fact several clients rely on a specific case in response headers. This change will capitalize HTTP headers if the X-Arango-Version request header is sent by the client and contains a value of at least 20100 (for version 2.1). The default value for the compatibility can also be set at server start, using the --server.default-api-compatibility option.

  • simplified usage of db._createStatement()

    Previously, the function could not be called with a query string parameter as follows:

    db._createStatement(queryString);
    

    Calling it as above resulted in an error because the function expected an object as its parameter. From now on, it's possible to call the function with just the query string.
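
    A short sketch of both call styles (the query and the collection name users are illustrative):

    var stmt = db._createStatement("FOR u IN users RETURN u");             // just the query string
    var stmt2 = db._createStatement({ query: "FOR u IN users RETURN u" }); // object form still works
    stmt.execute().toArray();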

  • make ArangoDB not send back a WWW-Authenticate header to a client in case the client sends the X-Omit-WWW-Authenticate HTTP header.

    This is done to prevent browsers from showing their built-in HTTP authentication dialog for AJAX requests that require authentication. ArangoDB will still return an HTTP 401 (Unauthorized) if the request doesn't contain valid credentials, but it will omit the WWW-Authenticate header, allowing clients to bypass the browser's authentication dialog.

  • added REST API method HTTP GET /_api/job/job-id to query the status of an async job without potentially fetching it from the list of done jobs

  • fixed non-intuitive behavior in jobs API: previously, querying the status of an async job via the API HTTP PUT /_api/job/job-id removed a currently executing async job from the list of queryable jobs on the server. Now, when querying the result of an async job that is still executing, the job is kept in the list of queryable jobs so its result can be fetched by a subsequent request.

  • use a new data structure for the edge index of an edge collection. This improves the performance for the creation of the edge index and in particular speeds up removal of edges in graphs. Note however that this change might change the order in which edges starting at or ending in a vertex are returned. However, this order was never guaranteed anyway and it is not sensible to guarantee any particular order.

  • provide a size hint to edge and hash indexes when initially filling them; this will lead to fewer re-allocations when populating these indexes

    this may speed up building indexes when opening an existing collection

  • don't requeue identical context methods in V8 threads in case a method is already registered

  • removed arangod command line option --database.remove-on-compacted

  • export the sort attribute for graph traversals to the HTTP interface

  • add support for arangodump/arangorestore for clusters

v2.0.8 (XXXX-XX-XX)

  • fixed too-busy iteration over skiplists

    Even when a skiplist query was restricted by a limit clause, the skiplist index was queried without the limit. this led to slower-than-necessary execution times.

  • fixed timeout overflows on 32 bit systems

    this bug has led to problems when select was called with a high timeout value (2000+ seconds) on 32bit systems that don't have a forgiving select implementation. when the call was made on these systems, select failed so no data would be read or sent over the connection

    this might have affected some cluster-internal operations.

  • fixed ETCD issues on 32 bit systems

    ETCD was completely non-functional on 32 bit systems. The first call to the watch API crashed it. This was because atomic operations worked on data structures that were not properly aligned on 32 bit systems.

  • fixed issue #848: db.someEdgeCollection.inEdge does not return correct value when called the 2nd time after a .save to the edge collection

v2.0.7 (2014-05-05)

  • issue #839: Foxx Manager missing "unfetch"

  • fixed a race condition at startup

    this fixes undefined behavior in case the logger was invoked directly at startup, before the logger initialization code was called. This could only have occurred for code that was executed before the invocation of main(), e.g. during ctor calls of statically defined objects.

v2.0.6 (2014-04-22)

  • fixed issue #835: arangosh doesn't show correct database name

v2.0.5 (2014-04-21)

  • Fixed a caching problem in IE JS Shell

  • added cancelation for async jobs

  • upgraded to new gyp for V8

  • new Windows installer

v2.0.4 (2014-04-14)

  • fixed cluster authentication front-end issues for Firefox and IE, there are still problems with Chrome

v2.0.3 (2014-04-14)

  • fixed AQL optimizer bug

  • fixed front-end issues

  • added password change dialog

v2.0.2 (2014-04-06)

  • during cluster startup, do not log (somewhat expected) connection errors with log level error, but with log level info

  • fixed dashboard modals

  • fixed connection check for cluster planning front end: firefox does not support async:false

  • document how to persist a cluster plan in order to relaunch an existing cluster later

v2.0.1 (2014-03-31)

  • make ArangoDB not send back a WWW-Authenticate header to a client in case the client sends the X-Omit-WWW-Authenticate HTTP header.

    This is done to prevent browsers from showing their built-in HTTP authentication dialog for AJAX requests that require authentication. ArangoDB will still return an HTTP 401 (Unauthorized) if the request doesn't contain valid credentials, but it will omit the WWW-Authenticate header, allowing clients to bypass the browser's authentication dialog.

  • fixed issues in arango-dfdb:

    the dfdb was not able to unload certain system collections, so these couldn't be inspected with the dfdb sometimes. Additionally, it did not truncate corrupt markers from datafiles under some circumstances

  • added changePassword attribute for users

  • fixed non-working "save" button in collection edit view of web interface clicking the save button did nothing. one had to press enter in one of the input fields to send modified form data

  • fixed V8 compile error on MacOS X

  • prevent body length: -9223372036854775808 being logged in development mode for some Foxx HTTP responses

  • fixed several bugs in web interface dashboard

  • fixed issue #783: coffee script not working in manifest file

  • fixed issue #781: Cant save current query from AQL editor ui

  • bumped version in X-Arango-Version compatibility header sent by arangosh and other client tools from 1.5 to 2.0.

  • fixed startup options for arango-dfdb, added details option for arango-dfdb

  • fixed display of missing error messages and codes in arangosh

  • when creating a collection via the web interface, the collection type was always "document", regardless of the user's choice

v2.0.0 (2014-03-10)

  • first 2.0 release

v2.0.0-rc2 (2014-03-07)

  • fixed cluster authorization

v2.0.0-rc1 (2014-02-28)

  • added sharding :-)

    more detailed documentation on the sharding and cluster features can be found in the user manual, section Sharding

  • added collection._dbName attribute to query the name of the database from a collection
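
    For example (the collection name test is a placeholder):

    db.test._dbName;
    // "_system"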

  • INCOMPATIBLE CHANGE: using complex values in AQL filter conditions with operators other than equality (e.g. >=, >, <=, <) will disable usage of skiplist indexes for filter evaluation.

    For example, the following queries will be affected by change:

    FOR doc IN docs FILTER doc.value < { foo: "bar" } RETURN doc
    FOR doc IN docs FILTER doc.value >= [ 1, 2, 3 ] RETURN doc
    

    The following queries will not be affected by the change:

    FOR doc IN docs FILTER doc.value == 1 RETURN doc
    FOR doc IN docs FILTER doc.value == "foo" RETURN doc
    FOR doc IN docs FILTER doc.value == [ 1, 2, 3 ] RETURN doc
    FOR doc IN docs FILTER doc.value == { foo: "bar" } RETURN doc
    
  • INCOMPATIBLE CHANGE: removed undocumented method collection.saveOrReplace

    this feature was never advertised nor documented nor tested.

  • INCOMPATIBLE CHANGE: removed undocumented REST API method /_api/simple/BY-EXAMPLE-HASH

    this feature was never advertised nor documented nor tested.

  • added explicit startup parameter --server.reuse-address

    This flag can be used to control whether sockets should be acquired with the SO_REUSEADDR flag.

    Regardless of this setting, sockets on Windows are always acquired using the SO_EXCLUSIVEADDRUSE flag.

  • removed undocumented REST API method GET /_admin/database-name

  • added user validation API at POST /_api/user/<username>

  • slightly improved users management API in /_api/user:

    Previously, when creating a new user via HTTP POST, the username needed to be passed in an attribute username. When users were returned via this API, the usernames were returned in an attribute named user. This was slightly confusing and was changed in 2.0 as follows:

    • when adding a user via HTTP POST, the username can be specified in an attribute user. If this attribute is not used, the API will look into the attribute username as before and use that value.
    • when users are returned via HTTP GET, the usernames are still returned in an attribute user.

    This change should be fully downwards-compatible with the previous version of the API.

  • added AQL SLICE function to extract slices from lists
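
    A brief sketch (run from arangosh; SLICE takes the list, a start offset and an optional length):

    db._query('RETURN SLICE([ 1, 2, 3, 4, 5 ], 1, 2)').toArray();
    // [ [ 2, 3 ] ]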

  • made module loader more node compatible

  • the startup option --javascript.package-path for arangosh is now deprecated and does nothing. Using it will not cause an error, but the option is ignored.

  • added coffee script support

  • Several UI improvements.

  • Exchanged icons in the graphviewer toolbar

  • always start networking and HTTP listeners when starting the server (even in console mode)

  • allow vertex and edge filtering with user-defined functions in TRAVERSAL, TRAVERSAL_TREE and SHORTEST_PATH AQL functions:

    // using user-defined AQL functions for edge and vertex filtering
    RETURN TRAVERSAL(friends, friendrelations, "friends/john", "outbound", {
      followEdges: "myfunctions::checkedge",
      filterVertices: "myfunctions::checkvertex"
    })
    
    // using the following custom filter functions
    var aqlfunctions = require("org/arangodb/aql/functions");
    aqlfunctions.register("myfunctions::checkedge", function (config, vertex, edge, path) {
      return (edge.type !== 'dislikes'); // don't follow these edges
    }, false);
    
    aqlfunctions.register("myfunctions::checkvertex", function (config, vertex, path) {
      if (vertex.isDeleted || ! vertex.isActive) {
        return [ "prune", "exclude" ]; // exclude these and don't follow them
      }
      return [ ]; // include everything else
    }, false);
    
  • fail if invalid strategy, order or itemOrder attribute values are passed to the AQL TRAVERSAL function. Omitting these attributes is not considered an error, but specifying an invalid value for any of these attributes will make an AQL query fail.

  • issue #751: Create database through API should return HTTP status code 201

    By default, the server now returns HTTP 201 (created) when creating a new database successfully. To keep compatibility with older ArangoDB versions, the startup parameter --server.default-api-compatibility can be set to a value of 10400 to indicate API compatibility with ArangoDB 1.4. The compatibility can also be enforced by setting the X-Arango-Version HTTP header in a client request to this API on a per-request basis.

  • allow direct access from the db object to collections whose names start with an underscore (e.g. db._users).

    Previously, access to such collections via the db object was possible from arangosh, but not from arangod (and thus Foxx and actions). The only way to access such collections from these places was via the db._collection(<name>) workaround.
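
    For example, the following sketch now also works in server-side code such as Foxx actions:

    var users = db._users.toArray();   // direct access instead of db._collection("_users").toArray()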

  • allow \n (as well as \r\n) as line terminator in batch requests sent to /_api/batch HTTP API.

  • use --data-binary instead of --data parameter in generated cURL examples

  • issue #703: Also show path of logfile for fm.config()

  • issue #675: Dropping a collection used in "graph" module breaks the graph

  • added "static" Graph.drop() method for graphs API

  • fixed issue #695: arangosh server.password error

  • use pretty-printing in --console mode by default

  • simplified ArangoDB startup options

    Some startup options are now superfluous or their usage is simplified. The following options have been changed:

    • --javascript.modules-path: this option has been removed. The modules paths are determined by arangod and arangosh automatically based on the value of --javascript.startup-directory.

      If the option is set on startup, it is ignored, so startup will not abort with an "unrecognized option" error.

    • --javascript.action-directory: this option has been removed. The actions directory is determined by arangod automatically based on the value of --javascript.startup-directory.

      If the option is set on startup, it is ignored, so startup will not abort with an "unrecognized option" error.

    • --javascript.package-path: this option is still available but it is not required anymore to set the standard package paths (e.g. js/npm). arangod will automatically use this standard package path regardless of whether it was specified via the options.

      It is possible to use this option to add additional package paths to the standard value.

    Configuration files included with arangod are adjusted accordingly.

  • layout of the graphs tab adapted to better fit with the other tabs

  • database selection is moved to the bottom right corner of the web interface

  • removed priority queue index type

    this feature was never advertised nor documented nor tested.

  • display internal attributes in document source view of web interface

  • removed separate shape collections

    When upgrading to ArangoDB 2.0, existing collections will be converted to include shapes and attribute markers in the datafiles instead of using separate files for shapes.

    When a collection is converted, existing shapes from the SHAPES directory will be written to a new datafile in the collection directory, and the SHAPES directory will be removed afterwards.

    This saves up to 2 MB of memory and disk space for each collection (the fewer distinct shapes there are in a collection, the higher the savings). Additionally, one less file descriptor per opened collection will be used.

    When creating a new collection, the amount of sync calls may be reduced. The same may be true for documents with yet-unknown shapes. This may help performance in these cases.

  • added AQL functions NTH and POSITION
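
    A minimal sketch (run from arangosh; NTH uses zero-based positions, POSITION returns a boolean unless asked for the index):

    db._query('RETURN [ NTH([ "a", "b", "c" ], 1), POSITION([ "a", "b", "c" ], "c") ]').toArray();
    // [ [ "b", true ] ]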

  • added signal handler for arangosh to save last command in more cases

  • added extra prompt placeholders for arangosh:

    • %e: current endpoint
    • %u: current user
  • added arangosh option --javascript.gc-interval to control amount of garbage collection performed by arangosh

  • fixed issue #651: Allow addEdge() to take vertex ids in the JS library

  • removed command-line option --log.format

    In previous versions, this option did not have an effect for most log messages, so it got removed.

  • removed C++ logger implementation

    Logging inside ArangoDB is now done using the LOG_XXX() macros. The LOGGER_XXX() macros are gone.

  • added collection status "loading"

v1.4.16 (XXXX-XX-XX)

  • fixed too eager datafile deletion

    this issue could have caused a crash when the compaction had marked datafiles as obsolete and they were removed while "old" temporary query results still pointed to the old datafile positions

  • fixed issue #826: Replication fails when a collection's configuration changes

v1.4.15 (2014-04-19)

  • bugfix for AQL query optimizer

    the following type of query was too eagerly optimized, leading to errors in code-generation:

    LET a = (FOR i IN [] RETURN i) LET b = (FOR i IN [] RETURN i) RETURN 1
    

    the problem occurred when both lists in the subqueries were empty. In this case invalid code was generated and the query couldn't be executed.

v1.4.14 (2014-04-05)

  • fixed race conditions during shape / attribute insertion

    A race condition could have led to spurious cannot find attribute #xx or cannot find shape #xx (where xx is a number) warning messages being logged by the server. This happened when a new attribute was inserted and at the same time was queried by another thread.

    Also fixed a race condition that may have occurred when a thread tried to access the shapes / attributes hash tables while they were resized. In these cases, the shape / attribute may have been hashed to a wrong slot.

  • fixed a memory barrier / cpu synchronization problem with libev, affecting Windows with Visual Studio 2013 (probably earlier versions are affected, too)

    The issue is described in detail here: http://lists.schmorp.de/pipermail/libev/2014q1/002318.html

v1.4.13 (2014-03-14)

  • added diagnostic output for Foxx application upload

  • allow dump & restore from ArangoDB 1.4 with an ArangoDB 2.0 server

  • allow startup options temp-path and default-language to be specified from the arangod configuration file and not only from the command line

  • fixed too eager compaction

    The compaction will now wait for several seconds before trying to re-compact the same collection. Additionally, some other limits have been introduced for the compaction.

v1.4.12 (2014-03-05)

  • fixed display bug in web interface which caused the following problems:

    • documents were displayed in web interface as being empty
    • document attributes view displayed many attributes with content "undefined"
    • document source view displayed many attributes with name "TYPEOF" and value "undefined"
    • an alert popping up in the browser with message "Datatables warning..."
  • re-introduced old-style read-write locks to support Windows versions older than Windows 2008R2 and Windows 7. This should re-enable support for Windows Vista and Windows 2008.

v1.4.11 (2014-02-27)

  • added SHORTEST_PATH AQL function

    this calculates the shortest paths between two vertices, using the Dijkstra algorithm, employing a min-heap

    By default, ArangoDB does not know the distance between any two vertices and will use a default distance of 1. A custom distance function can be registered as an AQL user function to make the distance calculation use any document attributes or custom logic:

    RETURN SHORTEST_PATH(cities, motorways, "cities/CGN", "cities/MUC", "outbound", {
      paths: true,
      distance: "myfunctions::citydistance"
    })
    
    // using the following custom distance function
    var aqlfunctions = require("org/arangodb/aql/functions");
    aqlfunctions.register("myfunctions::distance", function (config, vertex1, vertex2, edge) {
      return Math.sqrt(Math.pow(vertex1.x - vertex2.x) + Math.pow(vertex1.y - vertex2.y));
    }, false);
    
  • fixed bug in Graph.pathTo function

  • fixed small memleak in AQL optimizer

  • fixed access to potentially uninitialized variable when collection had a cap constraint

v1.4.10 (2014-02-21)

  • fixed graph constructor to allow graph with some parameter to be used

  • added node.js "events" and "stream"

  • updated npm packages

  • added loading of .json file

  • Fixed http return code in graph api with waitForSync parameter.

  • Fixed documentation in graph, simple and index api.

  • removed 2 tests due to change in ruby library.

  • issue #756: set access-control-expose-headers on CORS response

    the following headers are now whitelisted by ArangoDB in CORS responses:

    • etag
    • content-encoding
    • content-length
    • location
    • server
    • x-arango-errors
    • x-arango-async-id

v1.4.9 (2014-02-07)

  • return a document's current etag in response header for HTTP HEAD requests on documents that return an HTTP 412 (precondition failed) error. This allows retrieving the document's current revision easily.

  • added AQL function SKIPLIST to directly access skiplist indexes from AQL

    This is a shortcut method to use a skiplist index for retrieving specific documents in indexed order. The function capability is rather limited, but it may be used for several cases to speed up queries. The documents are returned in index order if only one condition is used.

    /* return all documents with mycollection.created > 12345678 */
    FOR doc IN SKIPLIST(mycollection, { created: [[ '>', 12345678 ]] })
      RETURN doc
    
    /* return first document with mycollection.created > 12345678 */
    FOR doc IN SKIPLIST(mycollection, { created: [[ '>', 12345678 ]] }, 0, 1)
      RETURN doc
    
    /* return all documents with mycollection.created between 12345678 and 123456790 */
    FOR doc IN SKIPLIST(mycollection, { created: [[ '>', 12345678 ], [ '<=', 123456790 ]] })
      RETURN doc
    
    /* return all documents with mycollection.a equal 1 and .b equal 2 */
    FOR doc IN SKIPLIST(mycollection, { a: [[ '==', 1 ]], b: [[ '==', 2 ]] })
      RETURN doc
    

    The function requires a skiplist index with the exact same attributes to be present on the specified collection. All attributes present in the skiplist index must be specified in the conditions specified for the SKIPLIST function. Attribute declaration order is important, too: attributes must be specified in the same order in the condition as they have been declared in the skiplist index.

  • added command-line option --server.disable-authentication-unix-sockets

    with this option, authentication can be disabled for all requests coming in via UNIX domain sockets, enabling clients located on the same host as the ArangoDB server to connect without authentication. Other connections (e.g. TCP/IP) are not affected by this option.

    The default value for this option is false. Note: this option is only supported on platforms that support Unix domain sockets.

  • call global arangod instance destructor on shutdown

  • issue #755: TRAVERSAL does not use strategy, order and itemOrder options

    these options were not honored when configuring a traversal via the AQL TRAVERSAL function. Now, these options are used if specified.

  • allow vertex and edge filtering with user-defined functions in TRAVERSAL, TRAVERSAL_TREE and SHORTEST_PATH AQL functions:

    // using user-defined AQL functions for edge and vertex filtering
    RETURN TRAVERSAL(friends, friendrelations, "friends/john", "outbound", {
      followEdges: "myfunctions::checkedge",
      filterVertices: "myfunctions::checkvertex"
    })
    
    // using the following custom filter functions
    var aqlfunctions = require("org/arangodb/aql/functions");
    aqlfunctions.register("myfunctions::checkedge", function (config, vertex, edge, path) {
      return (edge.type !== 'dislikes'); // don't follow these edges
    }, false);
    
    aqlfunctions.register("myfunctions::checkvertex", function (config, vertex, path) {
      if (vertex.isDeleted || ! vertex.isActive) {
        return [ "prune", "exclude" ]; // exclude these and don't follow them
      }
      return [ ]; // include everything else
    }, false);
    
  • issue #748: add vertex filtering to AQL's TRAVERSAL_TREE function

v1.4.8 (2014-01-31)

  • install foxx apps in the web interface

  • fixed a segfault in the import API

v1.4.7 (2014-01-23)

  • issue #744: Add usage example arangoimp from Command line

  • issue #738: added __dirname, __filename pseudo-globals. Fixes #733. (@by pluma)

  • mount all Foxx applications in system apps directory on startup

v1.4.6 (2014-01-20)

  • issue #736: AQL function to parse collection and key from document handle

  • added fm.rescan() method for Foxx-Manager

  • fixed issue #734: foxx cookie and route problem

  • added method fm.configJson for arangosh

  • include startupPath in result of API /_api/foxx/config

v1.4.5 (2014-01-15)

  • fixed issue #726: Alternate Windows Install Method

  • fixed issue #716: dpkg -P doesn't remove everything

  • fixed bugs in description of HTTP API _api/index

  • fixed issue #732: Rest API GET revision number

  • added missing documentation for several methods in HTTP API /_api/edge/...

  • fixed typos in description of HTTP API _api/document

  • defer evaluation of AQL subqueries and logical operators (lazy evaluation)

  • Updated font in WebFrontend, it now contains a version that renders properly on Windows

  • generally allow function return values as call parameters to AQL functions

  • fixed potential deadlock in global context method execution

  • added override file "arangod.conf.local" (and co)

v1.4.4 (2013-12-24)

  • uid and gid are now set in the scripts, there is no longer a separate config file for arangod when started from a script

  • foxx-manager is now an alias for arangosh

  • arango-dfdb is now an alias for arangod, moved from bin to sbin

  • changed from readline to linenoise for Windows

  • added --install-service and --uninstall-service for Windows

  • removed --daemon and --supervisor for Windows

  • arangosh and arangod now use the config file that matches the binary name, i.e. if you rename arangosh to foxx-manager it will use the config file foxx-manager.conf

  • fixed lock file for Windows

  • fixed issue #711, #687: foxx-manager throws internal errors

  • added --server.ssl-protocol option for client tools. This allows connecting from arangosh, arangoimp etc. to an ArangoDB server that uses a non-default value for --server.ssl-protocol. The default value for the SSL protocol is 4 (TLSv1). If the server is configured to use a different protocol, it was previously not possible to connect to it with the client tools.

  • added more detailed request statistics

    This adds the number of async-executed HTTP requests plus the number of HTTP requests per individual HTTP method type.

  • added --force option for arangorestore. This option allows continuing a restore operation even if the server reports errors in the middle of the restore operation.

  • better error reporting for arangorestore: in case the server returned an HTTP error, arangorestore previously reported this error only as an internal error without any details. Now server-side errors are reported by arangorestore with the server's error message.

  • include more system collections in dumps produced by arangodump. Previously, some system collections were intentionally excluded from dumps, even if the dump was run with --include-system-collections; for example, the collections _aal, _modules, _routing, and _users were excluded. This makes sense in a replication context but not always in a dump context. When specifying --include-system-collections, arangodump will now include the above-mentioned collections in the dump, too. Some other system collections are still excluded even when the dump is run with --include-system-collections, for example _replication and _trx.

  • fixed issue #701: ArangoStatement undefined in arangosh

  • fixed typos in configuration files

v1.4.3 (2013-11-25)

  • fixed a segfault in the AQL optimizer, occurring when a constant non-list value was used on the right-hand side of an IN operator that had a collection attribute on the left-hand side

  • issue #662:

    Fixed access violation errors (crashes) in the Windows version, occurring under some circumstances when accessing databases with multiple clients in parallel

  • fixed issue #681: Problem with ArchLinux PKGBUILD configuration

v1.4.2 (2013-11-20)

  • fixed issue #669: Tiny documentation update

  • ported Windows version to use native Windows API SRWLocks (slim read-write locks) and condition variables instead of homemade versions

    MSDN states the following about the compatibility of SRWLocks and Condition Variables:

    Minimum supported client:
    Windows Vista [desktop apps | Windows Store apps]

    Minimum supported server:
    Windows Server 2008 [desktop apps | Windows Store apps]
    
  • fixed issue #662: ArangoDB on Windows hanging

    This fixes a deadlock issue that occurred on Windows when documents were written to a collection at the same time when some other thread tried to drop the collection.

  • fixed file-based logging in Windows

    the logger complained on startup if the specified log file already existed

  • fixed startup of server in daemon mode (--daemon startup option)

  • fixed a segfault in the AQL optimizer

  • issue #671: Method graph.measurement does not exist

  • changed Windows condition variable implementation to use Windows native condition variables

    This is an attempt to fix spurious Windows hangs as described in issue #662.

  • added documentation for JavaScript traversals

  • added --code-page command-line option for Windows version of arangosh

  • fixed a problem when creating edges via the web interface.

    The problem only occurred if a collection was created with type "document collection" via the web interface, and afterwards was dropped and re-created with type "edge collection". If the web interface page was not reloaded, the old collection type (document) was cached, making the subsequent creation of edges into the (seeming-to-be-document) collection fail.

    The fix is to not cache the collection type in the web interface. Users of an older version of the web interface can reload the collections page if they are affected.

  • fixed a caching problem in arangosh: if a collection was created using the web interface, and then removed via arangosh, arangosh did not actually drop the collection due to caching.

    Because the drop operation was not carried out, this caused misleading error messages when trying to re-create the collection (e.g. cannot create collection: duplicate name).

  • fixed ALT-introduced characters for arangosh console input on Windows

    The Windows readline port was not able to handle characters that are built using CTRL or ALT keys. Regular characters entered using the CTRL or ALT keys were silently swallowed and not passed to the terminal input handler.

    This did not seem to cause problems for the US keyboard layout, but was a severe issue for keyboard layouts that require the ALT (or ALT-GR) key to construct characters. For example, entering the character { with a German keyboard layout requires pressing ALT-GR + 9.

  • fixed issue #665: Hash/skiplist combo madness bit my ass

    this fixes a problem with missing/non-deterministic rollbacks of inserts in case of a unique constraint violation into a collection with multiple secondary indexes (with at least one of them unique)

  • fixed issue #664: ArangoDB installer on Windows requires drive c:

  • partly fixed issue #662: ArangoDB on Windows hanging

    This fixes dropping databases on Windows. In previous 1.4 versions on Windows, one shape collection file was not unloaded and removed when dropping a database, leaving one directory and one shape collection file in the otherwise-dropped database directory.

  • fixed issue #660: updated documentation on indexes

v1.4.1 (2013-11-08)

  • performance improvements for skip-list deletes

v1.4.1-rc1 (2013-11-07)

  • fixed issue #635: Web-Interface should have a "Databases" Menu for Management

  • fixed issue #624: Web-Interface is missing a Database selector

  • fixed segfault in bitarray query

  • fixed issue #656: Cannot create unique index through web interface

  • fixed issue #654: bitarray index makes server down

  • fixed issue #653: Slow query

  • fixed issue #650: Randomness of any() should be improved

  • made AQL DOCUMENT() function polymorphic and work with just one parameter.

    This allows using the DOCUMENT function like this:

    DOCUMENT('users/john')
    DOCUMENT([ 'users/john', 'users/amy' ])
    

    in addition to the existing use cases:

    DOCUMENT(users, 'users/john')
    DOCUMENT(users, 'john')
    DOCUMENT(users, [ 'users/john' ])
    DOCUMENT(users, [ 'users/john', 'users/amy' ])
    DOCUMENT(users, [ 'john', 'amy' ])
    
  • simplified usage of ArangoDB batch API

    It is not necessary anymore to send the batch boundary in the HTTP Content-Type header. Previously, the batch API expected the client to send a Content-Type header of multipart/form-data; boundary=<some boundary value>. This is still supported in ArangoDB 2.0, but clients can now also omit this header. If the header is not present in a client request, ArangoDB will ignore the request content type and read the MIME boundary from the beginning of the request body.

    This also allows using the batch API with the Swagger "Try it out" feature (which is not too good at sending a different or even dynamic content-type request header).

  • added API method GET /_api/database/user

    This returns the list of databases a specific user can see without changing the username/passwd.

  • issue #424: Documentation about IDs needs to be upgraded

v1.4.0 (2013-10-29)

  • fixed issue #648: /batch API is missing from Web Interface API Documentation (Swagger)

  • fixed issue #647: Icon tooltips missing

  • fixed issue #646: index creation in web interface

  • fixed issue #645: Allow jumping from edge to linked vertices

  • merged PR for issue #643: Some minor corrections and a link to "Downloads"

  • fixed issue #642: Completion of error handling

  • fixed issue #639: compiling v1.4 on maverick produces warnings on -Wstrict-null-sentinel

  • fixed issue #634: Web interface bug: Escape does not always propagate

  • fixed issue #620: added startup option --server.default-api-compatibility

    This adds the following changes to the ArangoDB server and clients:

    • the server provides a new startup option --server.default-api-compatibility. This option can be used to determine the compatibility of (some) server API return values. The value for this parameter is a server version number, calculated as follows: 10000 * major + 100 * minor (e.g. 10400 for ArangoDB 1.3). The default value is 10400 (1.4), the minimum allowed value is 10300 (1.3).

      When setting this option to a value lower than the current server version, the server might respond with old-style results to "old" clients, increasing compatibility with "old" (non-up-to-date) clients.

    • the server will on each incoming request check for an HTTP header x-arango-version. Clients can optionally set this header to the API version number they support. For example, if a client sends the HTTP header x-arango-version: 10300, the server will pick this up and might send ArangoDB 1.3-style responses in some situations.

      Setting either the startup parameter or using the HTTP header (or both) allows running "old" clients with newer versions of ArangoDB, without having to adjust the clients too much.

    • the location headers returned by the server for the APIs /_api/document/... and /_api/collection/... will have different values depending on the used API version. If the API compatibility is 10300, the location headers returned will look like this:

      location: /_api/document/....
      

      whereas when an API compatibility of 10400 or higher is used, the location headers will look like this:

      location: /_db/<database name>/_api/document/...
      

    Please note that even with this compatibility option in place, old API versions may not be supported by the server forever.
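
    For example (a hedged sketch; the database directory path is only a placeholder), a server that should keep answering 1.3-style clients could be started like this:

    > arangod --server.default-api-compatibility 10300 /path/to/database-directory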

  • fixed issue #643: Some minor corrections and a link to "Downloads" by @frankmayer

  • started issue #642: Completion of error handling

  • fixed issue #621: Standard Config needs to be fixed

  • added function to manage indexes (web interface)

  • improved server shutdown time by signaling shutdown to the application server, logging, cleanup, and compactor threads

  • added foxx-manager replace command

  • added foxx-manager installed command (a more intuitive alias for list)

  • fixed issue #617: Swagger API is missing '/_api/version'

  • fixed issue #615: Swagger API: Some commands have no parameter entry forms

  • fixed issue #614: API : Typo in : Request URL /_api/database/current

  • fixed issue #609: Graph viz tool - different background color

  • fixed issue #608: arangosh config files - eventually missing in the manual

  • fixed issue #607: Admin interface: no core documentation

  • fixed issue #603: Aardvark Foxx App Manager

  • fixed a bug in type-mapping between AQL user functions and the AQL layer

    The bug caused errors like the following when working with collection documents in an AQL user function:

    TypeError: Cannot assign to read only property '_id' of #<ShapedJson>
    
  • create fewer system collections when creating a new database

    This is achieved by deferring collection creation until the collections are actually needed by ArangoDB. The following collections are affected by the change:

    • _fishbowl
    • _structures

v1.4.0-beta2 (2013-10-14)

  • fixed compaction on Windows

    The compaction on Windows did not ftruncate the cleaned datafiles to a smaller size. This has been fixed so that not only is the content of the files cleaned, but the files are also re-created with potentially smaller sizes.

  • only the following system collections will be excluded from replication from now on:

    • _replication
    • _trx
    • _users
    • _aal
    • _fishbowl
    • _modules
    • _routing

    Especially the following system collections will now be included in replication:

    • _aqlfunctions
    • _graphs

    In previous versions of ArangoDB, all system collections were excluded from the replication.

    The change also caused a change in the replication logger and applier: in previous versions of ArangoDB, only a collection's id was logged for an operation. This did not cause problems for non-system collections, but for system collections their ids might differ. In addition to the collection id, ArangoDB will now also log the name of the collection for each replication event.

    The replication applier will now preferably use the collection name attribute from logged events.

  • added database selection to arango-dfdb

  • provide foxx-manager, arangodump, and arangorestore in Windows build

  • ArangoDB 1.4 will refuse to start if option --javascript.app-path is not set.

  • added startup option --server.allow-method-override

    This option can be set to allow overriding the HTTP request method in a request using one of the following custom headers:

    • x-http-method-override
    • x-http-method
    • x-method-override

    This allows bypassing proxies and tools that would otherwise just let certain types of requests pass. Enabling this option may impose a security risk, so it should only be used in very controlled environments.

    The default value for this option is false (no method overriding allowed).
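
    As a hedged illustration (the document handle is a placeholder and the request body is omitted), a client that can only send POST requests through an intermediate proxy could tunnel a document replacement like this once the option is enabled:

    POST /_api/document/users/john HTTP/1.1
    x-http-method-override: PUT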

  • added "details" URL parameter for bulk import API

    Setting the details URL parameter to true in a call to POST /_api/import will make the import return details about non-imported documents in the details attribute. If details is false or omitted, no details attribute will be present in the response. This is the same behavior that previous ArangoDB versions exposed.

  • added "complete" option for bulk import API

    Setting the complete URL parameter to true in a call to POST /_api/import will make the import fail completely if at least one of the documents cannot be imported successfully.

    It defaults to false, which will make ArangoDB continue importing the other documents from the import even if some documents cannot be imported. This is the same behavior that previous ArangoDB versions exposed.
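
    A minimal sketch combining both new URL parameters (the collection name is a placeholder; the request body and other import parameters, such as the import type, are omitted):

    POST /_api/import?collection=users&details=true&complete=true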

  • added missing swagger documentation for /_api/log

  • calling /_api/logs (or /_admin/logs) is only permitted from the _system database now.

    Calling this API method for/from any other database will result in an HTTP 400 error.

  • ported fix from https://github.com/novus/nvd3/commit/0894152def263b8dee60192f75f66700cea532cc

    This prevents JavaScript errors from occurring in Chrome when in the admin interface, section "Dashboard".

  • show current database name in web interface (bottom right corner)

  • added missing documentation for /_api/import in swagger API docs

  • allow specification of the database name for the replication sync command and the replication applier

    This allows syncing from a master database with a different name than the slave database.

  • issue #601: Show DB in prompt

    arangosh now displays the database name as part of the prompt by default.

    The prompt can be changed using the --prompt option, e.g.

    > arangosh --prompt "my db is named \"%d\"> "
    

v1.4.0-beta1 (2013-10-01)

  • make the Foxx manager use per-database app directories

    Each database now has its own subdirectory for Foxx applications. Each database can thus use different Foxx applications if required. A Foxx app for a specific database resides in <app-path>/databases/<database-name>/<app-name>.

    System apps are shared between all databases. They reside in <app-path>/system/<app-name>.

  • only trigger an engine reset in development mode for URLs starting with /dev/

    This prevents ArangoDB from reloading all Foxx applications when it is not actually necessary.

  • changed error code from 10 (bad parameter) to 1232 (invalid key generator) for errors that are due to an invalid key generator specification when creating a new collection

  • automatic detection of content-type / mime-type for Foxx assets based on filenames, added possibility to override auto detection

  • added endpoint management API at /_api/endpoint

  • changed HTTP return code of PUT /_api/cursor from 400 to 404 in case a non-existing cursor is referred to

  • issue #360: added support for asynchronous requests

    Incoming HTTP requests with the headers x-arango-async: true or x-arango-async: store will be answered by the server instantly with a generic HTTP 202 (Accepted) response.

    The actual requests will be queued and processed by the server asynchronously, allowing the client to continue sending other requests without waiting for the server to process the actually requested operation.

    The exact point in time when a queued request is executed is undefined. If an error occurs during execution of an asynchronous request, the client will not be notified by the server.

    The maximum size of the asynchronous task queue can be controlled using the new option --scheduler.maximal-queue-size. If the queue already contains this many tasks and a new asynchronous request comes in, the server will reject it with an HTTP 500 (internal server error) response.

    Results of incoming requests marked with the header x-arango-async: true will be discarded by the server immediately. Clients have no way of accessing the result of such an asynchronously executed request. This is fire-and-forget.

    To later retrieve the result of an asynchronously executed request, clients can mark a request with the header x-arango-async: store. This makes the server store the result of the request in memory until it is explicitly fetched by a client via the /_api/job API. The /_api/job API also provides methods for basic inspection of which pending or already finished requests there are on the server, plus ways for garbage-collecting unneeded results.
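
    A rough sketch of the flow (the endpoint is only an example, the request body is omitted, and the job id 12345 is a placeholder):

    POST /_api/cursor HTTP/1.1
    x-arango-async: store

    The server answers immediately with HTTP 202 (Accepted); the stored result can later be fetched through the /_api/job API, for example via PUT /_api/job/12345.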

  • Added new option --scheduler.maximal-queue-size.

  • issue #590: Manifest Lint

  • added data dump and restore tools, arangodump and arangorestore.

    arangodump can be used to create a logical dump of an ArangoDB database, or just dedicated collections. It can be used to dump both a collection's structure (properties and indexes) and data (documents).

    arangorestore can be used to restore data from a dump created with arangodump. arangorestore currently does not re-create any indexes, and does not yet handle referenced documents in edges properly when doing just partial restores. This will be fixed before the 1.4 stable release.

  • introduced --server.database option for arangosh, arangoimp, and arangob.

    The option allows these client tools to use a certain database for their actions. In arangosh, the current database can be switched at any time using the command

    db._useDatabase(<name>);
    

    When no database is specified, all client tools will assume they should use the default database _system. This is done for downwards-compatibility reasons.
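
    A hedged command-line sketch of the new option (the database name mydb is a placeholder):

    > arangosh --server.database mydb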

  • added basic multi database support (alpha)

    New databases can be created using the REST API POST /_api/database and the shell command db._createDatabase(<name>).

    The default database in ArangoDB is called _system. This database is always present and cannot be deleted by the user. When an older version of ArangoDB is upgraded to 1.4, the previously only database will automatically become the _system database.

    New databases can be created with the above commands, and can be deleted with the REST API DELETE /_api/database/<name> or the shell command db._dropDatabase(<name>);.

    Deleting databases is still unstable in ArangoDB 1.4 alpha and might crash the server. This will be fixed before the 1.4 stable release.

    To access a specific database via the HTTP REST API, the /_db/<name>/ prefix can be used in all URLs. ArangoDB will check if an incoming request starts with this prefix, and will automatically pick the database name from it. If the prefix is not there, ArangoDB will assume the request is made for the default database (_system). This is done for downwards-compatibility reasons.

    That means, the following URL pathnames are logically identical:

    /_api/document/mycollection/1234
    /_db/_system/_api/document/mycollection/1234
    

    To access a different database (e.g. test), the URL pathname would look like this:

    /_db/test/_api/document/mycollection/1234
    

    New databases can only be created, and existing databases can only be dropped, from within the default database (_system). It is not possible to drop the _system database itself.

    Cross-database operations are unintended and unsupported. The intention of the multi-database feature is to allow ArangoDB to manage a few databases in parallel, but to access only one database at a time from a connection or a request.

    When accessing the web interface via the URL pathname /_admin/html/ or /_admin/aardvark, the web interface for the default database (_system) will be displayed. To access the web interface for a different database, the database name can be put into the URLs as a prefix, e.g. /_db/test/_admin/html or /_db/test/_admin/aardvark.

    All internal request handlers and also all user-defined request handlers and actions (including Foxx) will only get to see the unprefixed URL pathnames (i.e. excluding any database name prefix). This is to ensure downwards-compatibility.

    To access the name of the requested database from any action (including Foxx), use req.database.

    For example, when calling the URL /myapp/myaction, the content of req.database will be _system (the default database because no database got specified) and the content of req.url will be /myapp/myaction.

    When calling the URL /_db/test/myapp/myaction, the content of req.database will be test, and the content of req.url will still be /myapp/myaction.
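
    A minimal arangosh sketch of the commands described above (the database name test is just an example):

    db._createDatabase("test");
    db._useDatabase("test");      // switch the current database of the arangosh connection
    db._useDatabase("_system");   // switch back; databases can only be dropped from _system
    db._dropDatabase("test");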

  • Foxx now excludes files starting with . (dot) when bundling assets

    This mitigates problems with editor swap files etc.

  • made the web interface a Foxx application

    This change caused the files for the web interface to be moved from html/admin to js/apps/aardvark in the file system.

    The base URL for the admin interface changed from _admin/html/index.html to _admin/aardvark/index.html.

    The "old" redirection to _admin/html/index.html will now produce a 404 error.

    When starting ArangoDB with the --upgrade option, this will automatically be remedied by putting in a redirection from / to /_admin/aardvark/index.html, and from /_admin/html/index.html to /_admin/aardvark/index.html.

    This also obsoletes the following configuration (command-line) options:

    • --server.admin-directory
    • --server.disable-admin-interface

    When these now-obsolete options are used at server start, no error is produced, for downwards-compatibility reasons.

  • changed User-Agent value sent by arangoimp, arangosh, and arangod from "VOC-Agent" to "ArangoDB"

  • changed journal file creation behavior as follows:

    Previously, a journal file for a collection was always created when the collection was created. When a journal filled up, the current journal was turned into a datafile and a new (empty) journal was created automatically. Apart from a few intended exceptions, a collection therefore always had at least one journal.

    This is changed now as follows:

    • when a collection is created, no journal file will be created automatically
    • when there is a write into a collection without a journal, the journal will be created lazily
    • when there is a write into a collection with a full journal, a new journal will be created automatically

    From the end user perspective, nothing should have changed, except that there is now less disk usage for empty collections. Disk usage of infrequently updated collections might also be reduced significantly by running the rotate() method of a collection, and not writing into a collection subsequently.

  • added method collection.rotate()

    This allows premature rotation of a collection's current journal file into a (read-only) datafile. The purpose of using rotate() is to prematurely allow compaction (which is performed on datafiles only) on data, even if the journal was not filled up completely.

    Using rotate() may make sense in the following scenario:

    c = db._create("test");
    for (i = 0; i < 1000; ++i) {
      c.save({ value: i }); // insert lots of data here (example payload)
    }
    
    ...
    c.truncate(); // collection is now empty
    // only data in datafiles will be compacted by following compaction runs
    // all data in the current journal would not be compacted
    
    // calling rotate will make the current journal a datafile, and thus make it
    // eligible for compaction
    c.rotate();
    

    Using rotate() may also be useful when data in a collection is known to not change in the immediate future. After having completed all write operations on a collection, performing a rotate() will reduce the size of the current journal to the actually required size (remember that journals are pre-allocated with a specific size) before making the journal a datafile. Thus rotate() may cause disk space savings, even if the datafile does not qualify for compaction after rotation.

    Note: rotating the journal is asynchronous, so that the actual rotation may be executed after rotate() returns to the caller.

  • changed compaction to merge small datafiles together (up to 3 datafiles are merged in a compaction run)

    In the regular case, this should leave fewer small datafiles around on disk and allow using fewer file descriptors in total.

  • added AQL MINUS function

  • added AQL UNION_DISTINCT function (more efficient than combination of UNIQUE(UNION()))
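
    A small AQL sketch of the two new set functions (the input arrays are just examples):

    RETURN UNION_DISTINCT([ 1, 2, 3 ], [ 2, 3, 4 ])
    RETURN MINUS([ 1, 2, 3 ], [ 2, 3, 4 ])

    With these inputs, UNION_DISTINCT should return the distinct elements of all arrays (1, 2, 3, and 4, in no guaranteed order), while MINUS should return the elements of the first array that do not occur in any of the others (just 1).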

  • updated mruby to 2013-08-22

  • issue #587: Add db._create() in help for startup arangosh

  • issue #586: Share a link on installation instructions in the User Manual

  • issue #585: Bison 2.4 missing on Mac for custom build

  • issue #584: Web interface images broken in devel

  • issue #583: Small documentation update

  • issue #581: Parameter binding for attributes

  • issue #580: Small improvements (by @guidoreina)

  • issue #577: Missing documentation for collection figures in implementor manual

  • issue #576: Get disk usage for collections and graphs

    This extends the result of the REST API for /_api/collection/figures with the attributes compactors.count, compactors.fileSize, shapefiles.count, and shapefiles.fileSize.

  • issue #575: installing devel version on mac (low prio)

  • issue #574: Documentation (POST /_admin/routing/reload)

  • issue #558: HTTP cursors, allow count to ignore LIMIT

v1.4.0-alpha1 (2013-08-02)

  • added replication. Check the online manual for details.

  • added server startup options --server.disable-replication-logger and --server.disable-replication-applier

  • removed the action deployment tool; this is now handled with Foxx and its manager, or by the kaerus node utility

  • fixed a server crash when using byExample / firstExample inside a transaction and the collection contained a usable hash/skiplist index for the example

  • defineHttp now only expects a single context

  • added collection detail dialog (web interface)

    Shows collection properties, figures (datafiles, journals, attributes, etc.) and indexes.

  • added documents filter (web interface)

    Allows searching for documents based on attribute values. One or many filter conditions can be defined, using comparison operators such as '==', '<=', etc.

  • improved AQL editor (web interface)

    Editor supports keyboard shortcuts (Submit, Undo, Redo, Select). Editor allows saving and reusing of user-defined queries. Added example queries to AQL editor. Added comment button.

  • added document import (web interface)

    Allows upload of JSON-data from files. Files must have an extension of .json.

  • added dashboard (web interface)

    Shows the status of replication and multiple system charts, e.g. Virtual Memory Size, Request Time, HTTP Connections etc.

  • added API method /_api/graph to query all graphs with all properties.

  • added example queries in web interface AQL editor

  • added arango.reconnect() method for arangosh to dynamically switch server or user name

  • added AQL range operator ..

    The .. operator can be used to easily iterate over a sequence of numeric values. It will produce a list of values in the defined range, with both bounding values included.

    Example:

    2010..2013
    

    will produce the following result:

    [ 2010, 2011, 2012, 2013 ]
    
  • added AQL RANGE function

  • added collection.first(count) and collection.last(count) document access functions

    These functions allow accessing the first or last n documents in a collection. The order is determined by document insertion/update time.
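
    A brief arangosh sketch (the collection name users is hypothetical):

    db.users.first(5);    // the 5 "oldest" documents by insertion/update time
    db.users.last(1);     // the most recently inserted/updated document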

  • added AQL INTERSECTION function

  • INCOMPATIBLE CHANGE: changed AQL user function namespace resolution operator from : to ::

    AQL user-defined functions were introduced in ArangoDB 1.3, and the namespace resolution operator for them was the single colon (:). A function call looked like this:

    RETURN mygroup:myfunc()
    

    The single colon caused an ambiguity in the AQL grammar, making it indistinguishable from named attributes or the ternary operator in some cases, e.g.

    { mygroup:myfunc ? mygroup:myfunc }
    

    The change of the namespace resolution operator from : to :: fixes this ambiguity.

    Existing user functions in the database will be automatically fixed when starting ArangoDB 1.4 with the --upgrade option. However, queries using user-defined functions need to be adjusted on the client side to use the new operator.

  • allow multiple AQL LET declarations separated by comma, e.g. LET a = 1, b = 2, c = 3

  • more useful AQL error messages

    The error position (line/column) is more clearly indicated for parse errors. Additionally, if a query references a collection that cannot be found, the error message will give a hint on the collection name.

  • changed return value for AQL DOCUMENT function in case document is not found

    Previously, when the AQL DOCUMENT function was called with the id of a document and the document could not be found, it returned undefined. This value is not part of the JSON type system and this has caused some problems. Starting with ArangoDB 1.4, the DOCUMENT function will return null if the document looked for cannot be found.

    In case the function is called with a list of documents, it will continue to return all found documents, and will not return null for non-found documents. This has not changed.

  • added single line comments for AQL

    Single line comments can be started with a double forward slash: //. They end at the end of the line, or the end of the query string, whichever is first.
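
    For example (a trivial query, just to show the comment syntax):

    // calculate a constant result
    RETURN 1 + 2 // trailing comments are possible, too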

  • fixed documentation issues #567, #568, #571.

  • added collection.checksum() method to calculate CRC checksums for collections

    This can be used to

    • check if data in a collection has changed
    • compare the contents of two collections on different ArangoDB instances
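
    A hedged arangosh sketch (the collection name and document are placeholders; the exact structure of the return value is not shown here):

    c = db._create("audit");
    c.save({ value: 1 });
    c.checksum();    // CRC checksum over the collection's current contents
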
  • issue #565: add description line to aal.listAvailable()

  • fixed several out-of-memory situations when double freeing or invalid memory accesses could happen

  • less msyncing during the creation of collections

    This is achieved by not syncing the initial (standard) markers in shapes collections. After all standard markers are written, the shapes collection will get synced.

  • renamed command-line option --log.filter to --log.source-filter to avoid misunderstandings

  • introduced new command-line option --log.content-filter to optionally restrict logging to just specific log messages (containing the filter string, case-sensitive).

    For example, to filter on just log entries which contain ArangoDB, use:

    --log.content-filter "ArangoDB"
    
  • added optional command-line option --log.requests-file to log incoming HTTP requests to a file.

    When used, all HTTP requests will be logged to the specified file, containing the client IP address, HTTP method, request URL, HTTP response code, and size of the response body.
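
    For example (the log file and database directory paths are placeholders):

    > arangod --log.requests-file /var/log/arangodb/requests.log /path/to/database-directory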

  • added a signal handler for SIGUSR1 signal:

    when ArangoDB receives this signal, it will respond to all further incoming requests with an HTTP 503 (Service Unavailable) error. This will be the case until another SIGUSR1 signal is caught, which will make ArangoDB start serving requests regularly again. Note: this is not implemented on Windows.
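
    On Linux, the signal can be sent with the standard kill utility (the process id is a placeholder):

    > kill -USR1 <arangod-pid>    # suspend: further requests are answered with HTTP 503
    > kill -USR1 <arangod-pid>    # sending the signal again resumes normal operation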

  • limited maximum request URI length to 16384 bytes:

    Incoming requests with longer request URIs will be responded to with an HTTP 414 (Request-URI Too Long) error.

  • require version 1.0 or 1.1 in HTTP version signature of requests sent by clients:

    Clients sending requests with a non-HTTP 1.0 or non-HTTP 1.1 version number will be served with an HTTP 505 (HTTP Version Not Supported) error.

  • updated manual on indexes:

    using system attributes such as _id, _key, _from, _to, _rev in indexes is disallowed and will be rejected by the server. This was the case since ArangoDB 1.3, but was not properly documented.

  • issue #563: can aal become a default object?

    aal is now a prefab object in arangosh

  • prevent certain system collections from being renamed, dropped, or even unloaded.

    Which restrictions there are for which system collections may vary from release to release, but users should in general not try to modify system collections directly anyway.

    Note: there are no such restrictions for user-created collections.

  • issue #559: added Foxx documentation to user manual

  • added server startup option --server.authenticate-system-only. This option can be used to restrict the need for HTTP authentication to internal functionality and APIs, such as /_api/* and /_admin/*. Setting this option to true will thus force authentication for the ArangoDB APIs and the web interface, but allow unauthenticated requests for other URLs (including user-defined actions and Foxx applications). The default value of this option is false, meaning that if authentication is turned on, authentication is still required for all incoming requests. Only by setting the option to true is this restriction lifted, so that authentication is required only for URLs starting with /_.

    Please note that authentication still needs to be enabled regularly by setting the --server.disable-authentication parameter to false. Otherwise no authentication will be required for any URLs as before.

  • protect collections against unloading when there are still document barriers around.

  • extended cap constraints to optionally limit the active data size in a collection to a specific number of bytes.

    The arguments for creating a cap constraint are now: collection.ensureCapConstraint(<count>, <byteSize>);

    It is supported to specify just a count (as in ArangoDB 1.3 and before), just a byteSize, or both. The first constraint that is met will trigger the automated document removal.
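
    A hedged arangosh sketch (collection names and limits are examples only):

    c = db._create("audittrail");
    c.ensureCapConstraint(5000);                      // count-based cap, as in ArangoDB 1.3 and before

    d = db._create("sessions");
    d.ensureCapConstraint(5000, 16 * 1024 * 1024);    // whichever limit is hit first triggers removal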

  • added db._exists(doc) and collection.exists(doc) for easy document existence checks
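
    A short arangosh sketch (the collection and key are placeholders):

    db._exists("users/john");    // check by document handle
    db.users.exists("john");     // the same check via the collection object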

  • added API /_api/current-database to retrieve information about the database the client is currently connected to (note: the API /_api/current-database has been removed in the meantime. The functionality is accessible via /_api/database/current now).

  • ensure a proper order of tick values in datafiles/journals/compactors. Any new files written will have the _tick values of their markers in order. For older files, there are edge cases at the beginning and end of the datafiles where _tick values are not properly in order.

  • prevent caching of static pages in the PathHandler. Whenever a static page is requested that is served by the general PathHandler, the server will respond to HTTP GET requests with a "Cache-Control: max-age=86400" header.

  • added "doCompact" attribute when creating collections and to collection.properties(). The attribute controls whether collection datafiles are compacted.

  • changed the HTTP return code from 400 to 404 for some cases when there is a referral to a non-existing collection or document.

  • introduced error code 1909 "too many iterations", which is thrown when graph traversals hit the maxIterations threshold.

  • optionally limit traversals to a certain number of iterations. The limitation can be achieved via the traversal API by setting the maxIterations attribute, and also via the AQL TRAVERSAL and TRAVERSAL_TREE functions by setting the same attribute. If traversals are not limited by the end user, a server-defined limit for maxIterations may be used to prevent server-side traversals from running endlessly.

  • added graph traversal API at /_api/traversal

  • added "API" link in web interface, pointing to REST API generated with Swagger

  • moved "About" link in web interface into "links" menu

  • allow incremental access to the documents in a collection from within AQL. This allows reading documents from a collection in chunks when a full collection scan is required. Memory usage might be much lower in this case, and queries might finish earlier if there is an additional LIMIT statement.

  • changed AQL COLLECT to use a stable sort, so any previous SORT order is preserved

  • issue #547: Javascript error in the web interface

  • issue #550: Make AQL graph functions support key in addition to id

  • issue #526: Unable to escape when an errorneous command is entered into the js shell

  • issue #523: Graph and vertex methods for the javascript api

  • issue #517: Foxx: Route parameters with capital letters fail

  • issue #512: Binded Parameters for LIMIT

v1.3.3 (2013-08-01)

  • issue #570: updateFishbowl() fails once

  • updated and fixed generated examples

  • issue #559: added Foxx documentation to user manual

  • added missing error reporting for errors that happened during import of edges

v1.3.2 (2013-06-21)

  • fixed memleak in internal.download()

  • made the shape-collection journal size adaptive: if shapes that are too big come in, a shape journal with a big-enough size will be created automatically. The maximum size of a shape journal is still restricted, but to a very big value that should never be reached in practice.

  • fixed a segfault that occurred when inserting documents with a shape size bigger than the default shape journal size (2MB)

  • fixed a locking issue in collection.truncate()

  • fixed value overflow in accumulated filesizes reported by collection.figures()

  • issue #545: AQL FILTER unnecessary (?)
