Steps | Presentation |
---|---|
1. Download the Apache Jena Fuseki ZIP file from the Apache Jena website. | |
2. After unpacking, the server can be started for the first time by running (double-clicking) "fuseki-server.bat". | |
3. The server offers a web interface that can be reached in the browser at localhost:3030. Here, so-called datasets, i.e. independent ontologies (e.g. for different projects), can be created and existing datasets can be edited. Files can also be uploaded. | |
- SPARQL queries and updates can also be sent directly from the web interface.
- The server can be stopped at any time by pressing "Ctrl+C" in the shell and confirming with "y".
- The "config.ttl" file can be edited with a text editor such as Notepad++. An example file is shown below; it already defines a dataset named "Beispiel" (German for "example").
```turtle
# Licensed under the terms of http://www.apache.org/licenses/LICENSE-2.0

## Fuseki Server configuration file.

@prefix :       <#> .
@prefix fuseki: <http://jena.apache.org/fuseki#> .
@prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
@prefix tdb:    <http://jena.hpl.hp.com/2008/tdb#> .
@prefix ja:     <http://jena.hpl.hp.com/2005/11/Assembler#> .

[] rdf:type fuseki:Server ;
   fuseki:services (
     <#service1>
   ) .

# Custom code.
[] ja:loadClass "com.hp.hpl.jena.tdb.TDB" .

# TDB
tdb:DatasetTDB rdfs:subClassOf ja:RDFDataset .
tdb:GraphTDB   rdfs:subClassOf ja:Model .

<#service1> rdf:type fuseki:Service ;
    fuseki:name "Beispiel" ;                    # http://host/Beispiel
    fuseki:serviceQuery "query" ;               # SPARQL query service (alt name)
    fuseki:serviceQuery "sparql" ;
    fuseki:serviceUpdate "update" ;
    fuseki:serviceUpload "upload" ;             # Non-SPARQL upload service
    fuseki:serviceReadWriteGraphStore "data" ;  # SPARQL Graph Store Protocol (read and write)
    # A separate read-only graph store endpoint:
    fuseki:serviceReadGraphStore "get" ;        # SPARQL Graph Store Protocol (read only)
    fuseki:dataset <#dataset> .

<#dataset> rdf:type ja:RDFDataset ;
    ja:defaultGraph <#model_inf> .

<#model_inf> a ja:InfModel ;
    ja:baseModel <#tdbGraph> ;
    ja:reasoner [
        ja:reasonerClass "openllet.jena.PelletReasonerFactory"
    ] .

<#tdbGraph> rdf:type tdb:GraphTDB ;
    tdb:dataset <#tdbDataset> .

<#tdbDataset> rdf:type tdb:DatasetTDB ;
    tdb:location "TDB" ;
    # Set the timeout for a SPARQL query in milliseconds.
    # 0 means no timeout, i.e. the query never times out.
    ja:context [ ja:cxtName "arq:queryTimeout" ; ja:cxtValue "0" ] .
```
The first part of the configuration consists of some useful prefix declarations.
Each data service consists of a base name, a set of operations with their endpoints, and a dataset for the RDF data.
This example provides SPARQL query, SPARQL update, file upload and the SPARQL Graph Store Protocol. The operations are declared with properties such as "fuseki:serviceQuery", and the endpoint names are given as strings such as "sparql".
An inference reasoner can be placed on top of a previously defined dataset. For this, the dataset, inference model and graph are stacked on top of each other. Here the reasoner Openllet is added.
Note that config.ttl only describes the desired configuration. For it to be executable, the code of the required additional components (e.g. the reasoner) must also be added.
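Once such a service is running, its endpoints can be addressed over plain HTTP. The following sketch (assuming a server at http://localhost:3030 with the dataset name "Beispiel" from the example configuration; the helper name is ours) builds a SPARQL SELECT request for the query endpoint using only the Python standard library. The actual network call is left commented out, since it only works with a running server:

```python
import urllib.parse
import urllib.request

def build_query_request(base, dataset, query):
    """Build an HTTP request for the SPARQL query endpoint of a Fuseki dataset.

    The endpoint name "sparql" matches the fuseki:serviceQuery entry
    in the example config.ttl above.
    """
    url = f"{base}/{dataset}/sparql"
    # The query is sent as a form-encoded POST body.
    data = urllib.parse.urlencode({"query": query}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=data,
        headers={
            "Content-Type": "application/x-www-form-urlencoded",
            "Accept": "application/sparql-results+json",
        },
    )

req = build_query_request(
    "http://localhost:3030", "Beispiel",
    "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10",
)
print(req.full_url)  # http://localhost:3030/Beispiel/sparql

# Only attempt this with a running local Fuseki server:
# import json
# with urllib.request.urlopen(req) as resp:
#     for row in json.load(resp)["results"]["bindings"]:
#         print(row)
```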
Steps | Presentation |
---|---|
6. To do this, the missing classes are added via the corresponding JAR files. The following JARs are needed in any case and can be found on Maven. | |
6.1 "openllet.jar" is necessary to include the reasoner mentioned above. | original-openllet-distribution: bundled JAR, download from the Openllet GitHub releases |
6.2 The results of a query can only be displayed if "jaxb-api.jar" is present. | jaxb-api: groupId: javax.xml.bind, artifactId: jaxb-api, version: 2.3.1 |
6.3 "jgrapht-core.jar" is needed for backups, at least in the configuration shown above. | jgrapht-core: groupId: org.jgrapht, artifactId: jgrapht-core, version: 1.2.0 |
There may be incompatibilities between the JARs. For example, at the time of writing this guide, the following incompatibility was known: openllet.jar version 2.6.5 can only be used with jgrapht-core.jar version 1.2.0 or lower (e.g. 1.1.0).
- Inside the Fuseki folder, a folder "lib" is created for this purpose.
- If you use the "fuseki-server" script to run the server, Fuseki automatically looks for a folder called "extra" inside FUSEKI_BASE. FUSEKI_BASE typically corresponds to the "run" folder inside the Fuseki directory.
- For the JAR files in the "lib" folder to be picked up, the folder must be referenced on the classpath inside "fuseki-server.bat". Below is an example batch file where the "lib" folder is included in the last line (";lib/*").
```bat
@REM Licensed to the Apache Software Foundation (ASF) under one
@REM or more contributor license agreements. See the NOTICE file
@REM distributed with this work for additional information
@REM regarding copyright ownership. The ASF licenses this file
@REM to you under the Apache License, Version 2.0 (the
@REM "License"); you may not use this file except in compliance
@REM with the License. You may obtain a copy of the License at
@REM
@REM     http://www.apache.org/licenses/LICENSE-2.0
@REM
@REM Unless required by applicable law or agreed to in writing, software
@REM distributed under the License is distributed on an "AS IS" BASIS,
@REM WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@REM See the License for the specific language governing permissions and
@REM limitations under the License.

@echo off
@REM modify this to name the server jar
@REM java -Xmx1200M -jar fuseki-server.jar %*

@REM Adding custom code to the Fuseki server:
@REM
@REM It is also possible to launch Fuseki using
@REM   java ...jvmargs... -cp $JAR org.apache.jena.fuseki.cmd.FusekiCmd %*
@REM
@REM In this way, you can add custom java to the classpath:
@REM
java -Xmx1200M -cp fuseki-server.jar;lib/* org.apache.jena.fuseki.cmd.FusekiCmd %*
```
- Data can be added to the dataset via file upload. The data must be in an RDF format such as Turtle or RDF/XML. In addition to plain data, SWRL rules can also be added, as long as they are serialized in an RDF format as well. For example, SWRL rules can be created in Protégé using the SWRL tab and then saved as a Turtle file. Below is such a rule.
```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix Test: <http://www.semanticweb.org/student/ontologies/2021/2/Test#> .

Test:s   rdf:type <http://www.w3.org/2003/11/swrl#Variable> .
Test:age rdf:type <http://www.w3.org/2003/11/swrl#Variable> .

[ <http://swrl.stanford.edu/ontologies/3.3/swrla.owl#isRuleEnabled> "true"^^xsd:boolean ;
  rdfs:comment ""^^xsd:string ;
  rdfs:label "S1"^^xsd:string ;
  rdf:type <http://www.w3.org/2003/11/swrl#Imp> ;
  <http://www.w3.org/2003/11/swrl#body> [ rdf:type <http://www.w3.org/2003/11/swrl#AtomList> ;
      rdf:first [ rdf:type <http://www.w3.org/2003/11/swrl#DatavaluedPropertyAtom> ;
          <http://www.w3.org/2003/11/swrl#propertyPredicate> Test:hasAge ;
          <http://www.w3.org/2003/11/swrl#argument1> Test:s ;
          <http://www.w3.org/2003/11/swrl#argument2> Test:age
        ] ;
      rdf:rest [ rdf:type <http://www.w3.org/2003/11/swrl#AtomList> ;
          rdf:first [ rdf:type <http://www.w3.org/2003/11/swrl#BuiltinAtom> ;
              <http://www.w3.org/2003/11/swrl#builtin> <http://www.w3.org/2003/11/swrlb#greaterThan> ;
              <http://www.w3.org/2003/11/swrl#arguments> [ rdf:type rdf:List ;
                  rdf:first Test:age ;
                  rdf:rest [ rdf:type rdf:List ;
                      rdf:first 18 ;
                      rdf:rest rdf:nil
                    ]
                ]
            ] ;
          rdf:rest rdf:nil
        ]
    ] ;
  <http://www.w3.org/2003/11/swrl#head> [ rdf:type <http://www.w3.org/2003/11/swrl#AtomList> ;
      rdf:first [ rdf:type <http://www.w3.org/2003/11/swrl#ClassAtom> ;
          <http://www.w3.org/2003/11/swrl#classPredicate> Test:Adult ;
          <http://www.w3.org/2003/11/swrl#argument1> Test:s
        ] ;
      rdf:rest rdf:nil
    ]
] .
```
The rule states that an individual is also a member of the Adult class if its hasAge value is greater than 18.
All existing data, as well as newly added data, is checked against this rule. The rules are also applied when queries are executed (e.g. a query for all individuals of the Adult class), and the newly inferred class memberships are taken into account.
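With the rule file uploaded, inferred memberships can be retrieved like any asserted data. A query along the following lines (reusing the Test: namespace from the rule above) would list all individuals classified as Test:Adult, including those inferred by the rule:

```sparql
PREFIX Test: <http://www.semanticweb.org/student/ontologies/2021/2/Test#>

SELECT ?individual
WHERE {
  ?individual a Test:Adult .
}
```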
- When executing SPARQL queries, note that different SPARQL operations use different endpoints.
- Query: For a SPARQL query (SELECT), the endpoint corresponds to the dataset (here: /Beispiel).
- Update: For adding or deleting data via SPARQL Update (INSERT or DELETE), the endpoint is the dataset path plus "update" (here: /Beispiel/update).
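As a sketch, the endpoint selection described above can be captured in a small helper (the base URL, dataset name and function name are our assumptions, following the earlier example configuration):

```python
def sparql_endpoint(base, dataset, operation):
    """Return the endpoint URL for a SPARQL operation on a Fuseki dataset.

    Per the setup above: queries go to the dataset path itself,
    updates to the dataset path plus "/update".
    """
    if operation == "query":    # SELECT, ASK, CONSTRUCT, DESCRIBE
        return f"{base}/{dataset}"
    if operation == "update":   # INSERT, DELETE
        return f"{base}/{dataset}/update"
    raise ValueError(f"unknown SPARQL operation: {operation}")

print(sparql_endpoint("http://localhost:3030", "Beispiel", "query"))
# http://localhost:3030/Beispiel
print(sparql_endpoint("http://localhost:3030", "Beispiel", "update"))
# http://localhost:3030/Beispiel/update
```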
Important distinction:
- When using Fuseki with the UI / web interface, security is handled by Apache Shiro.
- When using the standalone server, security settings are made directly in config.ttl or in separate password files: see Fuseki Data Access Control.
For an extended security configuration, there is documentation with examples at Jena Doc Permissions.
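For the web-interface case, the Shiro rules live in a "shiro.ini" file inside FUSEKI_BASE. The fragment below is only a sketch of the kind of rules involved (the user name and password are placeholders, and the exact filter wiring may differ between Fuseki versions); consult the documentation referenced above for the authoritative syntax:

```ini
[main]
# Filter shipped with Fuseki that only accepts requests from localhost.
localhostFilter = org.apache.jena.fuseki.authz.LocalhostFilter

[users]
# username = password (placeholder credentials)
admin = pw

[urls]
# Restrict the admin area (/$/...) to localhost; leave dataset endpoints open.
/$/** = localhostFilter
/**   = anon
```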