- As long as there is more than one set of faulty axioms explaining the inconsistency/incoherency, the Ontology Debugger repeats the dialog and poses further queries to the user.
- Once there is only one set of faulty axioms that explains the incoherency/inconsistency, the interaction is finished and no more questions are generated.
- Note that once a set of faulty axioms (or diagnosis) is found, this means that ***every axiom*** in this set is responsible for the inconsistency/incoherency in the ontology.
- **Remark**: The user of the Ontology Debugger is ***not*** required to analyze (1) which entailments (or statements) do or do not hold or why certain entailments (or statements) do or do not hold ***in the faulty input ontology*** or (2) ***why exactly the input ontology is faulty***. The user is just assumed to answer questions about what must or must not hold ***in the intended ontology or domain model***, respectively. Given the answers, the Ontology Debugger will return what must be repaired in the faulty input ontology.
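To make this interaction scheme concrete, here is a minimal sketch of the discrimination idea in Python (all names and data are illustrative, not the plug-in's actual API): each answer about the intended ontology eliminates every candidate diagnosis that contradicts it, until a single diagnosis remains.

```python
# Toy model of the interactive debugging loop. A diagnosis is a set of
# possibly-faulty axioms; each query comes with the set of diagnoses that
# predict a positive ("must hold") answer. All names are illustrative.

def discriminate(diagnoses, queries, answer):
    """Filter the candidate diagnoses with the user's answers until one is left."""
    candidates = list(diagnoses)
    for query, positive_predictors in queries:
        if len(candidates) <= 1:
            break  # unique diagnosis found -> no more questions are generated
        if answer(query):  # True: the axiom must hold in the intended ontology
            candidates = [d for d in candidates if d in positive_predictors]
        else:              # False: the axiom must not hold
            candidates = [d for d in candidates if d not in positive_predictors]
    return candidates

# Example: three candidate diagnoses over axioms ax1..ax3
d1, d2, d3 = frozenset({"ax1"}), frozenset({"ax2"}), frozenset({"ax1", "ax3"})
queries = [("Quokka SubClassOf Person", {d2, d3}),
           ("Koala DisjointWith Person", {d3})]
answer = lambda q: q == "Koala DisjointWith Person"  # the user's intended model
print(discriminate([d1, d2, d3], queries, answer))   # -> [frozenset({'ax1'})]
```

The point of the sketch is only that the user supplies answers about the intended domain model; the elimination of candidate diagnoses is done entirely by the debugger.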
### Step 1: Load the ontology
To demonstrate the features of the Ontology Debugger, we will now try to find the faulty axioms in the ***consistent but incoherent*** [*Koala ontology*](http://protege.stanford.edu/ontologies/koala.owl). Select ```File->Open from URL...``` and enter the URL http://protege.stanford.edu/ontologies/koala.owl or simply select the URL from the Bookmarks.
![step1](/uploads/71eb8cb6fc4c95a048e9ab71492ece24/step1.PNG)
Acknowledging this decision by pressing __Submit__ causes both axioms to be listed among the __Entailed Testcases__ in the view for Acquired Test Cases in the mid section.
![step2](/uploads/63bda321d784fc1792f86c6a03a1ed87/step2.PNG)
### Step 6: Answer all queries until the Debugger gives us a solution
Since the Ontology Debugger has not found a unique solution yet (i.e. there are still multiple possible sets of faulty axioms), the set of faulty axiom sets is updated based on the given query answer after the user presses the Submit-button. This update involves the deletion of faulty axiom sets inconsistent with the given answer and the generation of some new possible faulty axiom sets as a basis for the computation of the next query. If we continue answering the stated questions of the debugger we will finally end up with a unique set of faulty axioms (also known as diagnosis) corresponding to our Acquired Test Cases and Input Ontology.
**Note** that (1) performing adequate modifications to all the axioms in the obtained set of faulty axioms or (2) deleting all these axioms from the ontology is the only possible way of repairing the inconsistency / incoherency of the Input Ontology in the light of the query answers given during the debugging session. In other words, among all possible repairs of the Input Ontology, the one obtained at the end of the debugging session is the only one reflecting your desired domain model, i.e. in this case the domain model of Marsupials you believe is the correct one.
![finalstep](/uploads/aa1cd5248e7c58b7455ae330bcaafc86/finalstep.PNG)
Our debugging session ended with a set of 3 faulty axioms:
and
- [x] ```Koala SubClassOf isHardWorking value false```.
You can reproduce the provided solution by taking a look at the given answers in the **Acquired test cases** view in the mid section in the picture above.
Note that the axioms ```Quokka SubClassOf Person``` as well as ```Koala DisjointWith Person``` among the acquired test cases are axioms **inferred** by the Ontology Debugger that have been presented in queries. That is, these axioms do not occur in the Input Ontology.
# Preference settings for the Ontology Debugger Plug-In in Protégé
In the above debugging session the default settings and the *HermiT* reasoner were used. You can change these settings in the debugger preferences in ```File->Preferences```
### Preferences for Fault Localization
##### Engine Types
**Diagnosis Engines** are responsible for finding the sets of **Faulty Axioms** (or diagnoses). As **Diagnosis Engines** we can choose between HS-DAG, HS-Tree (default) and Inv-QuickXPlain. The last algorithm computes the faulty axiom sets (or diagnoses) directly, i.e. without the computation of minimal conflicts, and is the preferred choice if you want to find only a few diagnoses very fast. The other two algorithms are based on the calculation of conflicts and are preferable if you want to find the diagnoses in a best-first order (w.r.t. some cost estimation function, see below). In other words, Inv-QuickXPlain tends to exhibit the least wait for the user between two consecutive queries, whereas the other two engines always show the best possible sets of faulty axioms to the user in the **Faulty Axioms** tab.
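For intuition about how conflict-based engines relate conflicts to diagnoses: a diagnosis is a minimal hitting set of the minimal conflict sets. The following brute-force Python sketch illustrates only this relationship (the actual engines use HS-Tree/HS-DAG search rather than enumeration, and the axiom names are made up):

```python
from itertools import chain, combinations

def minimal_hitting_sets(conflicts):
    """All subset-minimal hitting sets of a family of conflict sets.
    Each result is a candidate diagnosis: repairing (e.g. deleting) its
    axioms resolves every conflict. Brute force, for illustration only."""
    universe = sorted(set(chain.from_iterable(conflicts)))
    hits = [set(c) for r in range(1, len(universe) + 1)
            for c in combinations(universe, r)
            if all(set(c) & conflict for conflict in conflicts)]
    # keep only the subset-minimal hitting sets
    return [h for h in hits if not any(g < h for g in hits)]

# Two minimal conflicts sharing the axiom ax2:
conflicts = [{"ax1", "ax2"}, {"ax2", "ax3"}]
# yields two diagnoses: {'ax2'} and {'ax1', 'ax3'}
print(minimal_hitting_sets(conflicts))
```

Here deleting `ax2` alone resolves both conflicts, so `{'ax2'}` is the minimum-cardinality diagnosis; `{'ax1', 'ax3'}` is the only other minimal one.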
##### Cost Estimation
The diagnosis engines HS-DAG and HS-Tree need cost estimation functions for the generation of faulty axiom sets (diagnoses).
The user can choose between three functions: **EqualCosts** treats all diagnoses as equally likely (no preference), **Cardinality** (default) prefers diagnoses comprising fewer axioms, and **Syntax** prefers diagnoses with higher probability, where the probability of a diagnosis is calculated according to the occurrence of syntactic keywords in its axioms.
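The three options can be pictured as simple cost functions over diagnoses. The following Python stand-ins (with made-up axioms and keyword weights, not the plug-in's exact formulas) only illustrate the idea of ranking diagnoses by cost:

```python
def equal_costs(diagnosis):
    return 1.0  # every diagnosis is treated the same (no preference)

def cardinality(diagnosis):
    return len(diagnosis)  # fewer axioms -> lower cost -> preferred

def syntax(diagnosis, keyword_weights):
    # toy version: a diagnosis is weighted by the (made-up) weights of the
    # syntactic keywords occurring in its axioms
    return sum(w for axiom in diagnosis
                 for keyword, w in keyword_weights.items() if keyword in axiom)

diagnoses = [{"Koala SubClassOf Marsupial"},
             {"Quokka SubClassOf Person", "Koala DisjointWith Person"}]
best = min(diagnoses, key=cardinality)  # the singleton diagnosis is preferred
```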
##### Diagnoses Calculation
Next you can choose how many faulty axiom sets you want the debugger to calculate at most (default is 9) before providing the next query. Please note that the higher the number, the lower the overall number of queries might be in a debugging session, but the more computation time might be required between two queries depending on the complexity of the ontology.
If you want to debug an __incoherent ontology__, the process quite often can be sped up by reducing the incoherency to inconsistency -- by adding a new individual to every unsatisfiable class. By default this option is enabled, since we normally want to be able to debug both inconsistent and incoherent ontologies.
##### Query Computation
**Enrich query**: By default, if possible, the generated queries are **enriched** with axioms that are not stated explicitly in the input ontology but can be deduced from axioms in the ontology using the logical reasoner. Thus the Ontology Debugger can show the user implicit consequences from the axioms given in the ontology and the acquired test cases.
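For intuition only, an enriched query axiom is one that is entailed but not asserted. A toy Python "reasoner" over SubClassOf statements (a transitive-closure sketch with made-up class names, not the actual reasoning machinery) shows the kind of implicit consequence a query can be enriched with:

```python
def entailed_subclass_axioms(asserted):
    """Toy 'reasoner': transitively close asserted (sub, super) SubClassOf
    pairs and return only the entailed-but-not-asserted ones, i.e. implicit
    consequences of the kind a query can be enriched with."""
    closure = set(asserted)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))  # a SubClassOf d follows by transitivity
                    changed = True
    return closure - set(asserted)

asserted = {("Quokka", "Marsupial"), ("Marsupial", "Animal")}
print(entailed_subclass_axioms(asserted))  # -> {('Quokka', 'Animal')}
```

The pair `('Quokka', 'Animal')` is not stated explicitly but follows from the asserted axioms, analogous to the inferred axioms shown in the debugger's queries.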
##### Measurements
The **Sort Criterion** is used for selecting a preferred query among a set of possible query candidates during query computation. You can choose between **MinCard** (default), which prefers queries involving a minimal number of axioms, **MinSum** and **MinMax**. Each of the latter criteria aims at selecting a query that might be best understood by the interacting user. Moreover, you can define Requirements Measures for query selection such as **Entropy** (default), **Split in Half** and **RIO**. These aim at minimizing the number of queries the interacting user has to answer until the correct diagnosis is obtained.
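Roughly speaking, a query partitions the candidate diagnoses into those predicting a positive answer and the rest; Split in Half prefers balanced partitions, while Entropy maximizes the expected information of the answer. The following simplified formulas are a sketch of this intuition, not necessarily the plug-in's exact definitions:

```python
import math

def split_in_half(n_pos, n_neg):
    """Lower is better: prefers queries whose answer eliminates roughly
    half of the candidate diagnoses either way."""
    return abs(n_pos - n_neg)

def entropy(p_pos):
    """Higher is better: expected information (in bits) of the answer;
    maximal when a positive answer has probability 0.5."""
    if p_pos in (0.0, 1.0):
        return 0.0  # the answer is already certain, asking gains nothing
    return -(p_pos * math.log2(p_pos) + (1 - p_pos) * math.log2(1 - p_pos))

print(split_in_half(3, 3))  # -> 0 (ideal split of 6 candidate diagnoses)
```

Under both measures, a query whose answer is maximally uncertain is the most informative one to ask next.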
##### Thresholds
Depending on the above selection of the Requirements Measure the threshold values can be set (Default for Entropy: 0.5, Cardinality: 0 and Cautious 0.4).
### Default Preferences