Research-grade vision tooling

Animal behavior analysis with a cleaner operational surface.

Annolid brings annotation, segmentation, tracking, pose estimation, and behavior analysis into one practical environment for labs that need reproducible results without a brittle pile of disconnected tools.

One workspace: Annotation, segmentation, tracking, and review in one lab-ready flow.
Human-in-the-loop: Correct predictions quickly instead of accepting opaque model output.
Research-ready outputs: Keep results structured enough for reruns and downstream analysis.
Frames → Annotation → Segmentation → Tracking → Behavior → Export

Built for the full analysis loop

Annolid is built around operational clarity: modern computer vision workflows and tools designed for real experiments rather than isolated benchmark demos.

Annotation

Precise labels without workflow drag

Create and refine instance labels, masks, and keypoints with tooling designed for iterative review instead of one-pass labeling.

Segmentation

Practical model workflows

Run segmentation pipelines that support research iteration while keeping outputs and review state understandable.

Tracking

Identity continuity in difficult scenes

Handle overlap, motion, and interaction-heavy videos with workflows that prioritize correction, visibility, and reproducibility.

Choose the path that matches your team

Annolid is useful whether you are curating training data, validating model output, or building a repeatable analysis system for a lab or shared facility.

For labeling teams

Curate high-quality training data

Build clean annotation sets for detection, segmentation, pose, and behavior workflows.

  • Frame extraction and review
  • Interactive correction loops
  • Export-friendly shape semantics
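
Annolid builds on LabelMe-style per-frame JSON annotations; assuming that layout (a top-level "shapes" list whose entries carry a label, a shape type, and a list of points), a minimal sketch of reading labels back out for downstream use:

```python
import json

def load_shapes(annotation_json: str):
    """Parse a LabelMe-style annotation string into (label, shape_type, points) tuples.

    Assumes the per-frame JSON layout used by LabelMe-family tools:
    a top-level "shapes" list whose entries carry "label", "shape_type",
    and a "points" list of [x, y] pairs. Check your exported files for
    the exact fields your Annolid version writes.
    """
    data = json.loads(annotation_json)
    return [
        (s["label"], s["shape_type"], s["points"])
        for s in data.get("shapes", [])
    ]

# Hypothetical frame annotation, illustrative only.
example = json.dumps({
    "shapes": [
        {"label": "mouse_1", "shape_type": "polygon",
         "points": [[10, 10], [40, 12], [30, 44]]},
        {"label": "nose", "shape_type": "point", "points": [[22, 18]]},
    ]
})

for label, kind, points in load_shapes(example):
    print(label, kind, len(points))
```

Keeping shape semantics this plain is what makes the exports friendly: any analysis script can consume them with the standard library alone.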

For model operators

Validate pipelines against real experiments

Test inference and post-processing in the same environment where review and comparison actually happen.

  • Segmentation and tracking evaluation
  • Video batch workflows
  • Human-in-the-loop quality control
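
Segmentation and tracking evaluation usually reduces to per-instance overlap scores. A generic sketch of mask IoU on boolean arrays (not an Annolid API, just the standard metric):

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union between two boolean instance masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 0.0  # both masks empty; define IoU as 0
    return float(np.logical_and(pred, gt).sum() / union)

# Two overlapping 4x4 toy masks: 4 pixels each, 2 shared.
a = np.zeros((4, 4), dtype=bool); a[:2, :2] = True
b = np.zeros((4, 4), dtype=bool); b[:2, 1:3] = True
print(mask_iou(a, b))  # intersection 2 / union 6
```

The same score thresholded per frame is the usual basis for matching predicted instances to ground truth before judging identity continuity.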

For labs

Turn ad hoc analysis into a system

Create a repeatable path from acquisition to export so collaborators can reproduce results instead of rebuilding them.

  • Operational docs and tutorials
  • Structured outputs for downstream analysis
  • Research-ready deployment surface
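
Structured per-frame outputs turn downstream analysis into a plain tabular problem. A sketch, assuming a hypothetical tracking CSV with frame, instance, and centroid columns (the column names here are illustrative; check the Annolid docs for the exact schema your version emits):

```python
import csv
import io
from collections import defaultdict

# Hypothetical tracking export: one row per instance per frame.
raw = """frame,instance,cx,cy
0,mouse_1,12.0,30.0
1,mouse_1,13.5,31.0
2,mouse_1,15.0,33.0
"""

def total_displacement(csv_text: str) -> dict:
    """Total centroid displacement per instance across consecutive frames."""
    tracks = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        tracks[row["instance"]].append(
            (int(row["frame"]), float(row["cx"]), float(row["cy"]))
        )
    totals = {}
    for name, pts in tracks.items():
        pts.sort()  # order rows by frame number
        totals[name] = sum(
            ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
            for (_, x1, y1), (_, x2, y2) in zip(pts, pts[1:])
        )
    return totals

print(total_displacement(raw))
```

Because the export is just rows keyed by frame and instance, a collaborator can rerun this kind of analysis without touching the GUI at all.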

Use the landing page as a front door, not a detour.

Start with installation if you want to run Annolid now, or go directly to the docs portal if you already know the workflow you need.

Cite Annolid

If Annolid supports your work, please cite one or more of the references below.

Yang C, Cleland TA. Annolid: Annotate, Segment, and Track Anything You Need. arXiv:2403.18690 (2024).

@misc{yang2024annolid,
  title={Annolid: Annotate, Segment, and Track Anything You Need},
  author={Chen Yang and Thomas A. Cleland},
  year={2024},
  eprint={2403.18690},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

Yang C, Forest J, Einhorn M, Cleland TA. Automated Behavioral Analysis Using Instance Segmentation. arXiv:2312.07723 (2023).

@article{yang2023automated,
  title={Automated Behavioral Analysis Using Instance Segmentation},
  author={Yang, Chen and Forest, Jeremy and Einhorn, Matthew and Cleland, Thomas A},
  journal={arXiv preprint arXiv:2312.07723},
  year={2023}
}