Thanks for reading. I'm wondering why, when I manually annotate a document in Platform, my annotation is treated as an FN. I would think that, because I hand-curated the annotation, it would be considered a TP. An FN means an incorrect indication of the absence of a condition when it is actually present. I'm referring to a case where 1) a condition is present and 2) I approve it by annotating it, yet the system seems to override my decision. If I annotate a term or phrase and I've ensured it's in the right class, I expect a green TP.
I have attached an example where I found a corporate entity, "ONEOK," that I manually annotated because it belongs to a certain class. However, rather than being shown in green as a TP, it is purple. Since this is factored into Experiment scores, it seems that a) I am being penalized for taking the time to manually annotate a document, and b) the machine is overriding what I have done as an SME.
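To make sure I'm not misreading the mechanics, here is a minimal sketch of how I assume span-level TP/FN counting works; the function, span format, and labels are hypothetical on my part, not Platform's actual implementation:

```python
# Minimal sketch of standard span-level TP/FN/FP counting.
# All names and the span format here are hypothetical, not Platform's API.

def score(gold_spans, predicted_spans):
    """Compare human-curated (gold) spans against model predictions.

    Each span is a (start, end, label) tuple. A gold span the model
    also predicts is a TP; a gold span the model misses is an FN;
    a predicted span with no matching gold span is an FP.
    """
    gold = set(gold_spans)
    pred = set(predicted_spans)
    tp = gold & pred   # model agrees with the human annotation
    fn = gold - pred   # human annotated it, but the model missed it
    fp = pred - gold   # model predicted it, but the human did not
    return len(tp), len(fn), len(fp)

# My "ONEOK" case as I understand it: I annotate the span (gold), and if
# the model's output does not include it, it counts as an FN even though
# my annotation itself is correct.
gold = [(10, 15, "ORG")]   # my manual annotation of "ONEOK"
pred = []                  # model did not predict the span
print(score(gold, pred))   # -> (0, 1, 0): one FN
```

If that reading is right, my "ONEOK" annotation sits on the gold side of the comparison, and the purple FN would mean the model missed it rather than that my annotation is wrong; but the way it feeds into the Experiment score still reads to me like a penalty.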
I hope this makes sense. Perhaps I need a better understanding of what happens in the system when I manually annotate, or perhaps I shouldn't manually annotate at all. It's odd to me, and I'd appreciate any feedback that clarifies what is happening and, more importantly, whether I am reading the results of manual annotations correctly. Thanks!

Here is another example where I manually categorized/annotated a document but, again, my manual input is treated as an FN:
