Ontology Debugger Plug-In for Protégé
This plug-in is still in development!
- You may experience faulty and unexpected behaviour such as deadlocks or exceptions. Please report any bugs you encounter via our feedback channel; we will try to fix them as soon as possible.
- There are still many features missing that we plan to implement (see the list of open issues). If you have ideas for features that would be nice to have, please send us your request.
- If you experience faulty behaviour of your Protégé instance (for example, Protégé cannot start), this plug-in may be the cause. You can test this as follows: delete the file called "org.exquisite.protege-<x.y.z>.RC.jar" in the "plugins" subdirectory of your Protégé installation directory and restart Protégé (the <x.y.z> represents the current version, such as 0.2.1). In the same way you can also check whether other plug-ins are responsible for the fault. If this plug-in is not causing the error, you can reinstall it later.
These steps are necessary in order to run the plug-in:
- Download the latest version of Protégé Desktop from http://protege.stanford.edu/ and follow the installation instructions.
- Note that the Debugger Plug-In is not compatible with Protégé version 4 and below.
- Install the Ontology Debugger plug-in with Protégé's update function:
File->Check for plugins... and select Ontology Debugger.
Alternatively, you can download the latest jar file of the Ontology Debugger and copy it into the
plugins subfolder of your Protégé 5 desktop client.
- If your Protégé client is already running, you will have to restart the client to load the plugin.
- After your Protégé client has restarted you will see the additional menu entry
Tools->Debug Ontology ...
About the Ontology Debugger Plug-In for Protégé
The Ontology Debugger's main task is to support the user in finding the faulty axioms in ontologies that are inconsistent, incoherent, or entail unwanted inferences.
- The process of finding the faulty axioms in the ontology that are responsible for the inconsistency/incoherency is done by interacting with the user.
- The interaction with the user takes place in the way of iteratively asking questions (or queries) to the user in the form of axioms - we call this interaction dialog a Debugging Session.
- Each axiom given in such a query can be read in the form of the question: Must this axiom be entailed by the desired ontology? or Is this axiom necessarily true in the domain that should be modeled by the ontology?
- The user responds with the answer YES if she thinks that this axiom must hold for the ontology or NO otherwise. It is also possible to leave an axiom unanswered.
- Note that the axioms of a query can be either explicitly stated axioms in the faulty ontology or statements inferred from axioms in the ontology and the answers given for previous queries.
- The answers given by the user are then taken into account to narrow down the set of faulty axiom sets (we also call such faulty axiom sets Possible Ontology Repairs or diagnoses).
- As long as there is more than one possible ontology repair, the Ontology Debugger repeats the dialog and poses further queries to the user.
- Once there is only one possible ontology repair left, the interaction is finished and no more questions are generated.
- Note that once a single possible ontology repair (or diagnosis) remains, this means that every axiom in this set is faulty, and all axioms together are responsible for all problems (inconsistency/incoherency/unwanted inferences) found in the ontology. Hence, either the deletion or the proper modification of all these faulty axioms is necessary to repair the ontology.
- During the localization of the faulty axioms in the ontology the user of the Ontology Debugger is not required to analyze
- which entailments (or statements) do or do not hold or why certain entailments (or statements) do or do not hold in the faulty input ontology or
- why exactly the input ontology is faulty.
- The user is just assumed to answer questions about what should be true or not true in the intended ontology or domain model, respectively. Given the answers, the Ontology Debugger will return what must be repaired in the faulty input ontology.
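The iterative narrowing described above can be sketched as a simple elimination loop. The following minimal Python toy is an illustration only, not the plug-in's actual algorithm: diagnoses are sets of axiom ids, `predicts(d, axiom)` is a stand-in for a reasoner call checking whether applying repair `d` leaves the axiom entailed, and the yes/no oracle ignores the option of leaving an axiom unanswered.

```python
def generate_query(diagnoses):
    """Pick an axiom that some candidate repairs remove and others keep,
    so that either answer eliminates at least one diagnosis."""
    for axiom in sorted(set().union(*diagnoses)):
        if any(axiom in d for d in diagnoses) and any(axiom not in d for d in diagnoses):
            return axiom

def debugging_session(diagnoses, predicts, ask_user):
    """Repeat the query dialog until a single possible repair remains."""
    while len(diagnoses) > 1:
        axiom = generate_query(diagnoses)
        verdict = ask_user(axiom)   # True = YES (must be entailed), False = NO
        # keep only the repairs that agree with the user's answer
        diagnoses = [d for d in diagnoses if predicts(d, axiom) == verdict]
    return diagnoses[0]

# Toy run: three single-axiom repair candidates; axiom 2 is the truly faulty one.
predicts = lambda d, axiom: axiom not in d   # removed axioms are no longer entailed
oracle = lambda axiom: axiom != 2            # the user knows axiom 2 is wrong
result = debugging_session([{1}, {2}, {3}], predicts, oracle)
```

In the real debugger the loop additionally handles unanswered axioms and recomputes new diagnoses after each answer, but the principle of discarding repairs that disagree with an answer is the same.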
We recommend our YouTube Channel which contains video tutorials on how to use OntoDebug in the ontology engineering process.
Check out our 30-minute video tutorial with an overview of the features of OntoDebug, presented by Patrick Rodler
In the following tutorial we want to show you
- the features of the Ontology Debugger Plug-In and
- how to use it to find the faulty axioms in the consistent but incoherent Koala ontology.
Step 1: Load the ontology
File->Open from URL... and enter the URL http://protege.stanford.edu/ontologies/koala.owl or select the appropriate URL from the Bookmarks.
For our tutorial select the incoherent Koala ontology from the Bookmarks
Step 2: Select a reasoner
Formulating queries and finding the faulty axioms requires the Ontology Debugger to use a reasoner. You have to select a reasoner such as HermiT, Pellet, FaCT++ or ELK to be able to start a debugging session.
For our tutorial, let us choose the already installed HermiT reasoner.
The Ontology Debugger requires a reasoner. For our tutorial we choose HermiT.
Please note that reasoners use different techniques to reason over ontologies. Therefore one reasoner may perform better than others on a specific ontology and worse on another. It can be helpful to select a different reasoner if the debugging is too time-consuming or throws exceptions and errors (for more details about reasoner performance prediction, see the references below).
Step 3: Open the Ontology Debugger Tab
Once you loaded the Koala ontology, you can open the Ontology Debugger Tab by selecting
Tools->Debug Ontology... in the menu. You can also open the tab by selecting
Window->Tabs->Debugger. The initial layout of the Ontology Debugger should look similar to this screenshot:
The Ontology Debugger Tab after the Koala ontology has been opened
Note: if you do not see this layout or if you see errors, the layout may have changed with a new version of the debugger. In such a case, select
Window->Reset selected tab to default state to restore the default layout.
Step 4: Understanding the layout
The Ontology Debugger is divided into three sections where each one is responsible for the presentation and manipulation of different information during the debugging session. In the following we will describe these sections and their meaning.
The Input Ontology View
The rightmost section shows a list of all logical axioms of the currently used Input Ontology, separated into the list of Possibly Faulty Axioms (the Knowledge Base or KB) and the list of Correct Axioms (also called the Background Knowledge).
Possibly Faulty Axioms
The list of Possibly Faulty Axioms represents the set of axioms that might be erroneous and thus might be the cause for the inconsistencies / incoherencies in your ontology. A subset of these axioms will finally be proposed as the possible repair set to resolve the ontology's inconsistency / incoherency. When loading or creating a new ontology, all logical axioms are assumed to be possibly faulty by default. The user can then decide whether some of these axioms are correct.
Correct Axioms are axioms that are assumed to be correct. Such axioms have to be identified by the user herself before starting a new debugging session.
Modifying The Input Ontology
Note: When loading the Koala ontology, the debugger assumes all logical axioms to be Possibly Faulty Axioms; thus all 42 logical axioms are listed there. As long as the Debugging Session has not been started, the user can modify the two lists by clicking the corresponding icons to mark axioms as either possibly faulty or correct. Alternatively, the user can select one or more axioms and either drag and drop them to the target list or use the arrow buttons (use CTRL-A to select all axioms). Additionally, the user can filter the Possibly Faulty Axioms using the search tool bar in order to reduce the list of shown axioms.
There are many ways to modify the Input Ontology
The leftmost part shows the Queries view - the section through which the user interacts with the debugger the most during a debugging session.
In the Queries view we have 3 buttons that control the debugging session:
- Once an ontology is loaded, we have to press Start to begin a new debugging session and the interaction with the user. Once a session has been started, the Start button changes to Restart, which lets the user start the debugging session over.
- The Stop button stops a running debugging session. As long as there is no running session, this button is disabled (greyed out).
- The Submit button is used to answer a query. It is greyed out as long as a session has not been started and - once started - as long as the user has not yet classified any axiom in the query as entailed or non-entailed.
Possible Ontology Repairs
The Possible Ontology Repairs view below the Queries view shows the user possible candidate axiom sets as soon as a new debugging session is started.
Test Cases View
The mid section shows us the acquired and saved test cases of the current debugging session.
Acquired Test Cases
Acquired Test Cases are axioms that have already been answered by the user in previous queries. They are categorized either as Entailed Test Cases, when the user classified them as correct statements of the ontology, or as Non Entailed Test Cases, when according to the user's answer they must not be entailed by the ontology (i.e. these statements are not correct).
In the image below we see an example with three Entailed Test Cases and one Non Entailed Test Case which have been acquired during the interaction with the user.
The Acquired Test Cases show us the answers - the yellow background color highlights an inferred axiom
Note: when you restart or stop a debugging session that already contains given answers (i.e. Acquired Test Cases), the user will be asked whether to save these answers in order to continue the session later with them.
Saved Test Cases
Next to the set of Acquired Test Cases you have the Saved Test Cases showing either manually added or previously saved acquired test cases.
While the Acquired Test Cases list shows the answers the user has given in the current debugging session, the Saved Test Cases list contains either manually added axioms (see the next section, Test Driven Development) or acquired axioms from previous debugging sessions that the user wants to reuse. In order to store Saved Test Cases permanently, they are stored as ontology annotations. This change to the ontology annotations means the ontology is marked as modified, and the user is asked to save the changes when closing Protégé.
Note that the manual addition of test cases is not possible during a running debugging session.
This screenshot shows a handcrafted non entailed test case next to two saved entailed test cases from a previous debugging session
Test Driven Development
The Saved Test Cases view also enables the ontology engineer to follow a test-driven ontology development approach. The underlying principle is similar to test-driven software development: the user specifies test cases for the ontology before, during or after its development.
By manually creating an entailed test case the user specifies a constraint (axiom) that MUST be satisfied in the intended ontology. It either has to be asserted or inferred by a reasoner.
By manually creating a non-entailed test case the user specifies a statement (axiom) that MUST NOT be satisfied in the intended ontology. That is if the given ontology is correct, this non-entailed test case must not be inferred by the reasoner.
In this way the ontology engineer can explicitly express his or her intention for the ontology. Consistent ontologies can also be tested by creating such test cases. Note that the underlying ontology will not be modified by adding test cases manually. Only for an entailed test case that is not yet entailed by the ontology does the user get the option to add this test case to the ontology with an add button. If we start a new debugging session, these manually crafted test cases are taken into account in the query generation in order to obtain a repair that fulfills the intended ontology.
As an example let's create a positive test case that is entailed by the given ontology:
Student SubClassOf Person and a negative test case that is not entailed by the given ontology:
Person SubClassOf Marsupials.
The user can now evaluate whether these test cases are fulfilled by pressing the evaluate icons. The evaluation will result in a green background highlighting, indicating the fulfillment of these test cases.
On the other hand the user can add test cases that contradict the given ontology. So let us add
Koala SubClassOf Person as a new entailed test case and
Forest SubClassOf Habitat as a negative test case.
We now get two new evaluation result types that are also possible: the negative test case
Forest SubClassOf Habitat is highlighted with a red background since this axiom is already asserted in the ontology.
Koala SubClassOf Person would be expected to be red too, but here we get a warning icon instead.
The reason is that the axiom consists of at least one unsatisfiable class (Koala) and thus nothing can be said about this test case.
This screenshot shows the four handcrafted test cases with all evaluation outcomes that are possible
Step 5: Starting a new Debugging Session
Let's press the button Start to start a new debugging session!
The ontology debugger first checks if the ontology is consistent and coherent.
If you debug a consistent and coherent ontology, the debugger recognizes this and informs the user that her ontology is correct. The camera ontology (http://protege.stanford.edu/ontologies/camera.owl), for example, is such a consistent and coherent ontology - the Ontology Debugger would recognize it, notify the user, and no further debugging would be necessary.
An information dialog pops up if the ontology meets the requirements (consistency and/or coherency)
Since our Koala ontology is incoherent, the debugger will present us the following two statements as part of the first query:
In the Queries view you now will see these two axioms:
Note: if you got other axioms, you probably have selected different preference values and/or a different reasoner - use the default preference values (see below) and HermiT for this tutorial
The Possible Ontology Repairs View
Let us ignore these two axioms at first and let us concentrate on the lower part presenting the different Possible Ontology Repairs.
Repairs are possible erroneous axioms - we are finished when we get one such repair
These Possible Ontology Repairs represent sets of possibly faulty axioms. They are calculated once the session has been started and are recalculated each time the user submits a query answer.
If only one Possible Ontology Repair has been found, we have identified the faulty axioms that altogether explain the inconsistency and/or incoherency of the ontology, and the debugging session is finished.
In our example, however, the Ontology Debugger has identified multiple Possible Ontology Repairs. In detail, the debugger found 9 of them because we defined in our preferences that the search for Possible Ontology Repairs should stop after at most 9 found diagnoses.
What is a Possible Ontology Repair? Given the current information, each of these sets of possibly faulty axioms constitutes (on its own) a possible explanation of the faulty ontology. Each additionally answered query will narrow down the Possible Ontology Repairs.
Note the repair icon that comes with every possible ontology repair. This icon enables a per-axiom modification or deletion of each found repair set of axioms. A detailed description of this feature will be presented later, when a final repair is found.
Each Possible Ontology Repair, consisting of a set of faulty axioms, is calculated based on so-called minimal conflict sets. A minimal conflict set is a minimal (or irreducible) inconsistent / incoherent set of axioms of the inconsistent / incoherent ontology.
The minimal conflict sets can be viewed in the Ontology Debugger's Conflicts view.
Note that the conflict sets can only be calculated when HS-Tree or HS-DAG is used as Diagnosis Engine Type (selectable in the debugger's preferences).
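To illustrate how minimal conflict sets relate to Possible Ontology Repairs, here is a small self-contained Python sketch over a propositional toy knowledge base. It is an illustration only: the real engines work on description logic axioms with an OWL reasoner and use far more efficient algorithms (e.g. QuickXplain-style divide and conquer for conflicts, HS-Tree for hitting sets) than the brute-force enumeration below.

```python
from itertools import combinations, product

def consistent(axioms, variables):
    """Toy 'reasoner': brute-force satisfiability of a clause set.
    Each axiom is a frozenset of (variable, polarity) literals."""
    for values in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, values))
        if all(any(model[v] == pol for v, pol in ax) for ax in axioms):
            return True
    return False

def minimal_conflicts(axioms, variables):
    """All minimal (irreducible) inconsistent subsets, found by
    exhaustive search over subsets of increasing size."""
    found = []
    for size in range(1, len(axioms) + 1):
        for subset in combinations(axioms, size):
            s = set(subset)
            if any(c <= s for c in found):
                continue  # already contains a smaller conflict
            if not consistent(subset, variables):
                found.append(s)
    return found

def possible_repairs(conflicts, axioms):
    """Minimal hitting sets of the conflicts = Possible Ontology Repairs."""
    repairs = []
    for size in range(1, len(axioms) + 1):
        for cand in combinations(axioms, size):
            s = set(cand)
            if any(r <= s for r in repairs):
                continue  # a smaller repair already suffices
            if all(s & c for c in conflicts):
                repairs.append(s)
    return repairs

# Tiny inconsistent knowledge base:  p,  p -> q,  not q
A1 = frozenset({("p", True)})                 # p
A2 = frozenset({("p", False), ("q", True)})   # p -> q
A3 = frozenset({("q", False)})                # not q
kb = [A1, A2, A3]
conflicts = minimal_conflicts(kb, ["p", "q"])
repairs = possible_repairs(conflicts, kb)
```

Running this on the three-axiom KB yields exactly one minimal conflict, {A1, A2, A3}, and three Possible Ontology Repairs, each removing one of its axioms - mirroring how the debugger's repairs are minimal hitting sets of all minimal conflicts.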
Step 6: Answer the first query
Continuing with the session from above, the Queries view shows us the following statements to be answered:
hasDegree Domain Person
isHardWorking Domain Person.
These two axioms are to be understood as questions generated by the Ontology Debugger and posed to us (for this introduction let us assume that we are experts in the domain of Marsupials).
Note that the questions may be either axioms already stated in the ontology or axioms that are logically inferred from it. Logical inferences are consequences that follow from the explicit axioms of the ontology. In this particular case, the two axioms are indeed explicitly defined in the ontology; they can be found in the Possibly Faulty Axioms section of the Input Ontology view (you can use the search function). Inferred axioms are highlighted with a yellow background color.
Note: in order to proceed, the user has to answer at least one question of the current query; it is not necessary to answer all of them. If the user is unsure about the correctness of an axiom, she can leave it unanswered by deselecting any already given answer.
In our example we assume that both axioms are correct since only persons can have a degree, i.e. the domain of
Person, and we assume that only persons can be hard-working, i.e. the domain of
Person as well. Please note that as soon as the user answers/classifies one axiom the submit button becomes active - since, as previously stated, the user is not forced to answer all queries.
Answer both statements with YES and press the SUBMIT button
Acknowledging this decision by pressing Submit causes the Ontology Debugger to take the user's answers into account for the calculation of a new set of repairs and a new query.
Since the calculation may take some time - this heavily depends on the size and complexity of the ontology, on the preferred number of maximal ontology repairs, the selected reasoner and other influencing factors - a window pops up to inform the user about the progress of the calculation.
Note: To reduce this complexity, module extraction is enabled by default. Once a debugging session starts, module extraction reduces the set of possibly faulty axioms to a fragment that still preserves the inconsistency/incoherency. In our case, the initial set of 42 possibly faulty axioms was reduced to 20 as soon as the debugging session was started. Module extraction can be turned on/off in the preference settings.
After the calculation has finished, the user is presented a new query and a new set of possible repairs (see below).
Since the user answered with YES, the given answers are listed as Entailed Test Cases in the Acquired Test Cases view in the mid section. Negatively answered statements are listed as Non Entailed Test Cases.
A new query with a set of statements (including an inferred one) is presented to the user
Step 7: Answer all queries until the Debugger gives us a solution
Since the Ontology Debugger has not found a unique solution yet (i.e. there are still multiple Possible Ontology Repairs), the set of faulty axiom sets is updated based on the given query answer after the user presses the Submit-button. This update involves the deletion of faulty axiom sets inconsistent with the given answer and the generation of some new possible faulty axiom sets as a basis for the computation of the next query. If we continue answering the stated questions of the debugger we will finally end up with one Possible Ontology Repair (also known as diagnosis) corresponding to our Acquired Test Cases and Input Ontology.
Note that (1) performing adequate modifications to all the axioms in the obtained set of faulty axioms from the Possible Ontology Repair or (2) deleting all these axioms from the ontology is the only possible way of repairing the inconsistency / incoherency of the Input Ontology in the light of the query answers given during the debugging session. In other words, among all possible repairs of the Input Ontology, the one obtained at the end of the debugging session is the only one reflecting your desired domain model, i.e. in this case the domain model of Marsupials you believe is the correct one.
At the end of a debugging session the user gets notified that a Repair has been found
As it can be seen in the image above, our debugging session ended up with a Possible Ontology Repair consisting of 3 faulty axioms:
KoalaWithPhD EquivalentTo Koala and (hasDegree value PhD)
Quokka SubClassOf isHardWorking value true
Koala SubClassOf isHardWorking value false.
You can reproduce the provided solution either by taking a look at the given answers in the Acquired test cases view in the mid section or by opening the Answer History View which lists the given answers per querying-answering-iteration of our debugging session (see picture above).
Note that the axioms
Quokka SubClassOf Person as well as
Koala DisjointWith Person among the acquired test cases are axioms inferred by the Ontology Debugger that have been presented in queries. That is, these axioms are not listed in the Input Ontology.
The debugging session described in this tutorial can be seen in the following animation:
The animated debugging session described in the tutorial
Step 8: Repair the ontology
Once an ontology repair has been identified - either by the debugging process itself or as soon as the user identifies the problematic axioms during the debugging session by herself - the user can repair the problematic axioms by pressing the repair icon .
The icon opens a repair interface to manually change or delete the problematic axioms in order to resolve the inconsistency or incoherency in the ontology.
The repair interface supports the user in the repair process by giving her the opportunity to delete and edit the axioms of the found repair.
In addition, the user can get an explanation (by pressing the corresponding icon) of why the currently selected axiom is problematic. There are two possible reasons why an axiom from the ontology repair may be problematic:
- either it is responsible for an inconsistency/incoherency
- or it is responsible for the entailment of at least one axiom that has been explicitly declared as non-entailed by the user.
- If neither is the case, the axiom has been repaired.
Such an explanation is supposed to give the user a hint of how the selected axiom of a repair contributes to the faults and what to change in the axiom in order to obtain a correct ontology.
From the example shown above the explanation for the selected axiom
Quokka SubClassOf isHardWorking value true shows us that this axiom is responsible for an inconsistency/incoherency (expressed by owl:Thing SubClassOf owl:Nothing) and lists some axioms explaining this inconsistency/incoherency. Analysis of these elucidates that a
Quokka is a
Marsupial (axiom 5) and a
Person (axiom 2 and 4) simultaneously. However, since the Marsupial and Person are disjoint classes (axiom 1), the Quokka class is unsatisfiable.
We can either edit the axioms in such a way that the fault is resolved or simply delete them. Deleted axioms are shown in grey with a leading comment symbol (//).
The user can commit the changes by pressing the OK button; pressing the Cancel button discards any changes. Once the changes are committed to the ontology, the debugger checks whether the ontology is consistent/coherent and either informs the user that the ontology is correct or starts a new debugging session if there are still problems with the correctness of the ontology.
Step 9: Searching the knowledge base
We can verify that the axiom
Quokka SubClassOf Person is indeed an inferred axiom by searching the knowledge base (possibly faulty axioms) for axioms containing the search expression
Quokka using the search bar in the input ontology view.
The search can be case sensitive, restricted to whole words, ignore white space, or any combination of these. Additionally, we can search for axioms of a specific type.
Preference settings for the Ontology Debugger Plug-In in Protégé
In the above debugging session the default settings and the HermiT reasoner were used. You can change these settings in the debugger preferences.
Preferences for Fault Localization
Diagnosis Engines are responsible for finding the Possible Ontology Repairs (or diagnoses). As Diagnosis Engine we can choose between HS-Tree (default) and Inv-HS-Tree. The latter computes the Possible Ontology Repairs (or diagnoses) directly, i.e. without the computation of minimal conflicts, and is the preferred choice if you want to find only a few diagnoses very fast. The former is based on the calculation of conflicts and is preferable if you want to find the diagnoses in a best-first order (w.r.t. some repair preference order, see below). In other words, Inv-HS-Tree tends to exhibit the shortest wait between two consecutive queries, whereas HS-Tree always shows the best possible sets of faulty axioms in the Possible Ontology Repairs view.
Repair Preference Order
The diagnosis engine HS-Tree requires Preference Functions for the generation of Possible Ontology Repairs (diagnoses).
The user can choose between three functions: EqualCosts equalizes all repairs (no preference), Cardinality (default) prefers repairs with lower numbers of axioms per repair, and Syntax prefers repairs with higher probability where the probability of a repair is calculated according to the occurrence of syntactic keywords in the axioms.
Next you can choose how many repairs at most the debugger should calculate (default: 9) before providing the next query. Please note that the higher this number, the lower the overall number of queries in a debugging session might be, but the more computation time may be required between two queries, depending on the complexity of the ontology.
If you want to debug an incoherent ontology, the process quite often can be sped up by reducing the incoherency to inconsistency -- by adding a new individual to every unsatisfiable class. By default this option is enabled, since we normally want to be able to debug both inconsistent and incoherent ontologies.
In addition by using module extraction the process can once again be sped up as the search space in problematic ontologies can be reduced dramatically. By default this option is enabled.
Preferences for Query computation
Stage 1 has as its goal the minimization of the overall number of queries in the debugging session. There are several measures for this purpose (default: Entropy).
Query Quality Measure
You can define Query Quality Measure for query selection such as Entropy (default), Split in Half and RIO. These aim at minimizing the number of queries to be answered by the interacting user until obtaining the correct repair.
Approximation parameters control the quality of the found query w.r.t. the selected measure. Lower values signal better quality but potentially longer query computation time. (Defaults: Entropy: 0.5, Cardinality: 0, Cautious: 0.4.)
Stage 2 has as its goal the minimization of the number of axioms per query (MinCard) or of the complexity of the query axioms (MinSum tries to minimize the overall complexity of all query axioms; MinMax tries to minimize the complexity of the most complex query axiom). Axiom complexity is estimated from syntax fault probabilities - the higher the estimated fault probability of an axiom, the higher its estimated complexity. There are several Query Quality Measures for this purpose (default: Entropy).
Stage 3 tries to make the query easier to understand for the user by simplifying the axioms in the query. Users can select axiom types that are considered easy; these are used to enrich the query of Stage 2 first. Second, the enriched query is optimized, i.e. a smallest and easiest possible subset of it is determined.
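As a rough intuition for the Stage 1 measures: Split in Half prefers the query whose possible answers partition the remaining diagnoses as evenly as possible, so that either answer eliminates about half of them (Entropy generalizes this by weighting diagnoses with fault probabilities). The following minimal Python sketch is an illustration, not the plug-in's implementation; `predicts(d, axiom)` is a stand-in for a reasoner call checking whether repair d leaves the axiom entailed.

```python
def split_in_half_score(axiom, diagnoses, predicts):
    """0 is the best score: the diagnoses predicting YES and those
    predicting NO form two equally sized halves."""
    yes = sum(1 for d in diagnoses if predicts(d, axiom))
    return abs(yes - (len(diagnoses) - yes))

def best_query(candidates, diagnoses, predicts):
    """Pick the candidate axiom whose answer eliminates roughly half
    of the diagnoses no matter how the user replies."""
    return min(candidates, key=lambda a: split_in_half_score(a, diagnoses, predicts))

# Toy: four candidate repairs; asking about axiom 1 splits them 2/2,
# while asking about axiom 4 only splits them 3/1.
diagnoses = [{1, 2}, {1, 3}, {2, 3}, {4}]
predicts = lambda d, axiom: axiom not in d   # removed axioms are no longer entailed
choice = best_query([4, 1], diagnoses, predicts)
```

With this toy input, axiom 1 is chosen because its answer halves the set of diagnoses in either case, which is why such measures tend to reduce the overall number of queries.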
These default preferences were used for this tutorial.
Conflict Searcher: Cardinality
Engine Type: HS-Tree
Preference Function: Cardinality
Max. Number of Computed Ontology Repairs per Iteration: 9
also repair incoherency: yes
enable module extraction: yes
Query Quality Measure: Entropy
Cardinality Threshold: 0 (when active)
Cautious Parameter: 0.4 (when active)
enrich and optimize query: yes
Uli Sattler, Robert Stevens, Phillip Lord (2013): (I can’t get no) satisfiability. Ontogenesis. http://ontogenesis.knowledgeblog.org/1329
V. Sazonau: Performance Prediction of OWL Reasoners. Master's thesis, The University of Manchester. PDF: http://www.cs.man.ac.uk/~sazonauv/SazonauThesis.pdf