Details

Relevant for: Organisation Administrators, Administrators (see "User Roles in the Cockpit").
You have clicked on "Insights" in the Cockpit header, selected filter options and clicked on "Details" under "Benchmark - Clusters and Competencies by Exercise".
Here you will get an exercise-specific evaluation of the ratings for the selected competency, across all Assessments, Candidates and Observers. The evaluation differentiates between the competency level and the Behavior Anchor level: the Insights provide various statistics for each individual exercise in which the selected competency and its Behavior Anchors were rated.

At the top, you will first see the competency you selected to display the exercise-specific details. Next to it, the Competency Cluster to which the selected competency is assigned is displayed in gray. 
If you have not stored a Competency Cluster, you will only see the name of the selected competency here.
Below that, you will see the respective exercise with the details at the competency level and at the Behavior Anchor level. This structure is repeated for all exercises in which the selected competency was observed and rated.



Statistics - Competency

Under "Statistics - Competency" you can see three statistical indicators of the ratings given for the entire competency:

  1. Agreement
  2. Observability
  3. Internal consistency
The statistical indicators are explained in detail later in this article. However, you can also click on the "info-i" next to the statistics at any time to see an explanation. 

Statistics - Behavior Anchors

Under "Statistics - Behavior Anchors" you can see various statistical indicators of the ratings given for the individual Behavior Anchors of a competency: 

  1. Mean value
  2. Standard deviation 
  3. Agreement
  4. Observability

Internal consistency is not calculated here, as it is not very meaningful in this context.  

The left side of the table lists the individual Behavior Anchors of the selected competency that were observed and rated in the exercise during the Assessments. The table header lists the statistical indicators described above. The table cells show the value of each indicator for each Behavior Anchor, specific to the selected competency and the respective exercise.
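To make the two descriptive indicators more tangible, here is a minimal sketch in Python of how a mean value and a standard deviation could be calculated for the ratings of one Behavior Anchor. The example ratings, the 1-5 scale and the use of the sample standard deviation are assumptions for illustration only, not a description of the Applysia implementation.

    # Illustrative sketch: mean value and standard deviation of one Behavior Anchor.
    # The ratings below are invented; a 1-5 scale is assumed for the example.
    from statistics import mean, stdev

    # Hypothetical ratings of one Behavior Anchor in one exercise,
    # collected across Candidates and Observers.
    anchor_ratings = [3, 4, 4, 2, 5, 3, 4]

    anchor_mean = mean(anchor_ratings)   # central tendency of the ratings
    anchor_sd = stdev(anchor_ratings)    # spread of the ratings around the mean
                                         # (sample formula; the exact formula used
                                         # by Insights is not specified here)

    print(f"Mean: {anchor_mean:.2f}, Standard deviation: {anchor_sd:.2f}")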



The indicators mentioned are explained in detail below. 

They are also color-coded in the Insights, depending on how good the values are, to give you a rough guide for interpreting the results:

  1. A green marking stands for a good value,
  2. a yellow marking indicates clear potential for improvement, and
  3. an orange marking means that the competency or the Behavior Anchors should be revised.

Agreement

In Applysia Insights, the Observer Agreement is calculated automatically (for the statisticians among you: the intraclass correlation), both for the ratings of the competencies and for the ratings of the Behavior Anchors.

The correlation is thus a measure of the extent to which the ratings of different Observers match, e.g. for a Behavior Anchor. Differences are averaged across Candidates so that only general tendencies, and not individual profiles, influence the result. If an Anchor has a very low agreement, for example, this can indicate that it is not formulated clearly enough and therefore leaves the Observers unsure about what exactly is to be rated: it can happen that Observers give very different ratings even though they observed the Candidate at the same time. Here, the highest possible level of agreement is desirable.

In short: How high is the agreement (of the ratings) of the different Observers?
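If you would like to retrace the idea behind the agreement value, the following minimal sketch in Python computes a one-way intraclass correlation, ICC(1). The example ratings are invented, and the exact ICC variant and data handling used by Applysia Insights are not specified here.

    # Illustrative sketch of an intraclass correlation, ICC(1), as one possible
    # agreement measure; not necessarily the variant used by Applysia Insights.
    from statistics import mean

    # Hypothetical ratings: rows = Candidates, columns = Observers,
    # all rating the same Behavior Anchor in the same exercise.
    ratings = [
        [4, 4, 3],
        [2, 3, 2],
        [5, 4, 5],
        [3, 3, 4],
    ]

    n = len(ratings)         # number of Candidates (targets)
    k = len(ratings[0])      # number of Observers per Candidate
    grand_mean = mean(x for row in ratings for x in row)
    row_means = [mean(row) for row in ratings]

    # One-way ANOVA decomposition: variance between Candidates vs. within Candidates.
    ms_between = k * sum((m - grand_mean) ** 2 for m in row_means) / (n - 1)
    ms_within = sum(
        (x - m) ** 2 for row, m in zip(ratings, row_means) for x in row
    ) / (n * (k - 1))

    # ICC(1): values close to 1 mean the Observers rate the Candidates very similarly.
    icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    print(f"Agreement (ICC): {icc:.2f}")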


Observability

Observability indicates whether a competency or Behavior Anchor can be observed at all in an exercise. If, for example, half of all Observers do not rate a Behavior Anchor in an exercise, this could indicate that this Behavior Anchor often simply cannot be (clearly) observed in this exercise. This is a good starting point for taking a closer look at the design of the exercises and revising them if necessary, so that the desired competencies can be observed and rated (clearly). The goal here should therefore be to achieve 100% observability.

In short: Were all competencies and Behavior Anchors rated by all Observers? 
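As a rough illustration, the following sketch computes observability as the share of Observer ratings actually given for one Behavior Anchor. The example data and the use of None for a missing rating are assumptions for illustration only.

    # Illustrative sketch: observability of one Behavior Anchor in one exercise.
    # None stands for an Observer who left the Anchor unrated (assumed encoding).
    ratings = [4, None, 3, 5, None, 4, 3, 4]

    rated = sum(1 for r in ratings if r is not None)
    observability = rated / len(ratings) * 100   # goal: 100%

    print(f"Observability: {observability:.0f}% ({rated} of {len(ratings)} ratings given)")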


Internal consistency

The internal consistency (for the statisticians among you: Cronbach's Alpha) helps to assess to what extent the Behavior Anchors are suited to measuring the associated competency. A high value means that the Anchors are strongly related and thus probably suitable for rating the same competency. If the value is low, at least one of the Anchors probably does not fit the others. This can be the case in particular with quite broad competency classes and "combined" competencies such as "Strategy and Action competency". In principle, lower values do not mean that the Behavior Anchors are "bad", but rather that a closer look is warranted; it could, for example, also make sense to split the competency.

In short: Do the individual Behavior Anchors match and measure the same competency? 
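For those who want to see the calculation behind this value, here is a minimal sketch of Cronbach's Alpha for the Behavior Anchors of one competency. The example ratings and the data layout are invented for illustration and do not reflect how Applysia Insights handles missing values or scaling.

    # Illustrative sketch of Cronbach's Alpha for the Behavior Anchors of one competency.
    from statistics import variance

    # Hypothetical ratings: rows = Candidates, columns = Behavior Anchors of one competency.
    ratings = [
        [4, 4, 3, 4],
        [2, 3, 2, 2],
        [5, 4, 5, 5],
        [3, 3, 4, 3],
        [4, 5, 4, 4],
    ]

    k = len(ratings[0])                                   # number of Behavior Anchors
    item_vars = [variance(col) for col in zip(*ratings)]  # variance of each Anchor
    total_var = variance([sum(row) for row in ratings])   # variance of the sum scores

    # Alpha close to 1: the Anchors are strongly related and likely measure the same
    # competency; low values suggest at least one Anchor does not fit the others.
    alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
    print(f"Internal consistency (Cronbach's Alpha): {alpha:.2f}")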





    • Related Articles

    • User roles in the Cockpit
    • Status
    • Consolidated Matrix
    • Insights overview
    • Conference overview