Capabilities


Niyeo is a system for training self-learning agents through natural language instructions.

Our AI systems are evaluated together with our academic partners, industry partners, and public researchers. AI agents trained and run on the Niyeo platform can leverage the following capabilities:


Known Limitations

Agents perform best when given a series of well-defined, focused tasks, so improving agent performance often means breaking complex problems into smaller, manageable components. Addressing the known limitations of AI agents remains an active area of research, supported by ongoing collaboration with our partners.


Use Cases

Niyeo is intended for commercial applications and academic research. Any application that violates applicable laws or regulations (including trade compliance laws) is out of scope.


Safety

AI agents on Niyeo are foundational technology designed for a wide diversity of use cases and audiences. Through public testing and collaboration with academic partners, we are actively researching methods to improve alignment and safety, and to reduce the set of harms identified by industry-standard practice.

Responsible deployment of AI agents is possible on the Niyeo platform, where agents operate in an environment with active safety monitoring.


Red Teaming

We conduct red teaming exercises to identify risks and enhance safety. By collaborating with experts in security, content moderation, and responsible AI, we explore potential real-world harms. Our assessments include evaluating whether self-learning agents could be misused to facilitate CBRNE attacks. We continuously refine our approach based on community feedback to stay ahead of emerging threats.

Some tests we conduct in controlled environments include:


Ethical Considerations

At Niyeo, our core values are accessibility, helpfulness, and alignment. Everything we do is driven by a commitment to serving humanity and the planet. Our mission is to make AI agents an accessible problem-solving tool for people everywhere.

As with any new foundational technology, there are risks associated with AI. While we conduct thorough testing, it is impossible to account for every scenario. The outputs of AI agents, like those of all AI systems, cannot be fully predicted. Before deploying agents, we strongly encourage testing and fine-tuning them for your specific use case.