Q: When does your AI call a human?

Our AI does most of the work using tool calls, but sometimes it calls on humans. Usually it's one of three scenarios:

Some tasks the AI is blocked from doing

For example, gathering internal data requires working with employees inside the org, and in some contexts those employees don't want to work with an AI. There are also platforms that block AI agents outright (LinkedIn, Gong, etc.).

Some tasks the AI can't do because of the limits of LLMs

For example, the AI can create one slide for a deck, but it struggles to create the whole deck, because LLMs break down on large, complex tasks in a way that humans don't.

Some tasks the AI does well, but not consistently

For example, the AI might triage emails so that 80% of them are relevant 80% of the time. But it needs a human to check, because sometimes the AI gets it wrong, and there are almost always some emails (roughly 20%) that need to be removed.
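In code, that third scenario is essentially confidence-gated escalation: the AI makes the call, and anything below a threshold is routed to a human. Here is a minimal sketch in Python; triage_email, escalate_to_human, and RELEVANCE_THRESHOLD are hypothetical names for illustration, not hndl's actual API.

# Minimal sketch of the human-in-the-loop check described above.
# triage_email, escalate_to_human, and RELEVANCE_THRESHOLD are
# hypothetical names for illustration, not hndl's actual API.
from dataclasses import dataclass

@dataclass
class TriageResult:
    relevant: bool
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

RELEVANCE_THRESHOLD = 0.8  # below this, a human reviews the AI's call

def triage_email(body: str) -> TriageResult:
    # Stand-in for an LLM call that classifies the email.
    return TriageResult(relevant="invoice" in body.lower(), confidence=0.6)

def escalate_to_human(body: str, result: TriageResult) -> bool:
    # Stand-in for routing the email to a human reviewer,
    # who returns the final relevant/not-relevant decision.
    print(f"Human review requested (confidence={result.confidence:.2f})")
    return result.relevant

def handle(body: str) -> bool:
    result = triage_email(body)
    if result.confidence >= RELEVANCE_THRESHOLD:
        return result.relevant  # confident enough: the AI's call stands
    return escalate_to_human(body, result)  # not confident: a human decides

The same gate generalizes to the other two scenarios: instead of confidence, the check becomes whether the AI is allowed, or able, to do that step at all.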

Q: Is hndl a GPT or Claude wrapper?

The hardest thing we're building isn't the AI layer; it's a marketplace of humans who do the parts that the AI can't do well.