Let me share a story. A couple of years ago, I hired an intern; let's call him Bobby. Bobby's first task was some data entry. We had a book listing over 100 doctors with their specialties, clinic information, a bio, and a few other details, and we needed to get all of that into something we could search and reference on a website.
"Bobby, take this book and load it into a CSV for me, would ya?"
I checked back in on Bobby later that day and he said he'd finished the work. Great! So I checked it out. Bobby is a smart kid, and he went on to do great work later, but he'd taken pictures of the pages and pasted each one into a cell in a spreadsheet!
I remind myself of this story with new employees and now with AI. You must teach someone (or an AI) how to execute basic tasks before asking them to take on more complex tasks that build on those basics. For example, if I'd taught Bobby the structure of the CSV I wanted and even demonstrated how to load a single page, he probably would have had no problem with the rest.
In AI, especially with Agent-type workflows, I call this Bottoms-Up AI. Create the building blocks of tasks that you want the AI to know, then let the AI build skills, and even mastery, on top of those tasks.
A lot of earlier agent frameworks would start with the high-level task they'd been assigned, plan around it, and assign themselves sub-tasks to run. Top-Down. For example, a similar goal might be something like "Research doctors on this hospital's website and put it all into a CSV". Would you trust AI to do that blindly? I wouldn't, especially unsupervised. In a conversation with AI, I can keep adjusting and steering it in the right direction. But in something automated like "watch this website and keep this CSV updated with all of the doctors you find", this could go off the rails pretty quickly. Teaching the AI first how to browse the website, what the CSV structure should look like, how to identify whether someone is a doctor, and which fields to capture would do wonders for producing high-quality output run after run. And each task would be testable in isolation, so it's easier to confirm it's done right and to fix the logic when something is wrong.
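To make that concrete, here's a minimal Python sketch of how the doctor-directory job could be split into small, individually testable tasks. None of this is PyroPrompts code; the field names, the is_doctor heuristic, and the assumption that pages have already been fetched into plain dicts are all illustrative.

```python
import csv
from dataclasses import dataclass, asdict

# Fixed CSV schema, agreed on up front so the output shape is never a guess.
CSV_FIELDS = ["name", "specialty", "clinic", "bio"]

@dataclass
class DoctorRecord:
    name: str
    specialty: str
    clinic: str
    bio: str

def is_doctor(entry: dict) -> bool:
    """Small, testable rule for deciding whether a directory entry is a doctor."""
    title = entry.get("title", "").lower()
    return "md" in title.split() or "do" in title.split() or "physician" in title

def extract_record(entry: dict) -> DoctorRecord:
    """Map one raw entry to the fixed schema. Required fields raise KeyError
    instead of being guessed, so problems surface in testing, not in the CSV."""
    return DoctorRecord(
        name=entry["name"],
        specialty=entry["specialty"],
        clinic=entry.get("clinic", ""),
        bio=entry.get("bio", ""),
    )

def write_csv(records: list[DoctorRecord], path: str) -> None:
    """One job: serialize validated records to the agreed CSV layout."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=CSV_FIELDS)
        writer.writeheader()
        for record in records:
            writer.writerow(asdict(record))

def build_directory(raw_entries: list[dict], path: str) -> None:
    """Composition happens only after each piece is tested in isolation."""
    records = [extract_record(e) for e in raw_entries if is_doctor(e)]
    write_csv(records, path)
```

Each of those functions can be unit-tested on its own before the composed build_directory step ever runs against a live site.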
I've found that having a few really well-defined, concrete tasks is the game-changer in how an Agent works. Teach it to do those things well, use them as building blocks, and make sure it doesn't attempt things it doesn't know how to do (you can have it tell you when it doesn't know how to do something instead of guessing).
Teach tasks, then tell it to do those tasks. Add more tasks until it's a skill. Eventually, enough skills add up to mastery.
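One way to enforce that "don't guess" rule, again just an illustrative sketch rather than a real framework, is to keep a registry of taught tasks and have the agent refuse anything outside it:

```python
from typing import Any, Callable

# Hypothetical registry: the agent may only invoke tasks it has explicitly been taught.
KNOWN_TASKS: dict[str, Callable[..., Any]] = {}

def teach(name: str):
    """Decorator that registers a function as a task the agent is allowed to run."""
    def register(fn: Callable[..., Any]) -> Callable[..., Any]:
        KNOWN_TASKS[name] = fn
        return fn
    return register

@teach("count_doctors")
def count_doctors(rows: list[dict]) -> int:
    """A taught task: count entries already flagged as doctors."""
    return sum(1 for row in rows if row.get("is_doctor"))

def run_task(name: str, *args: Any, **kwargs: Any) -> Any:
    """Run a taught task, or admit ignorance instead of improvising."""
    if name not in KNOWN_TASKS:
        return f"I don't know how to do '{name}' yet. Teach me that task first."
    return KNOWN_TASKS[name](*args, **kwargs)

# run_task("count_doctors", [{"is_doctor": True}, {"is_doctor": False}])  -> 1
# run_task("summarize_reviews")  -> "I don't know how to do 'summarize_reviews' yet. ..."
```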
Starting with a vague ask (Top-Down) and expecting AI to plan, decide, and execute tasks on its own has really only wasted my money.
So, to help make this a no-code reality, PyroPrompts is introducing the ability to Run Workflows from within a Workflow. Mind blown, I know. Create a set of Workflows that execute specific tasks, then one Workflow to rule them all that knows how to call the "sub" Workflows. It's now in Closed Beta; contact us for access so we can learn more about your goals.
What's your experience with this? Is the up-front work of teaching AI how to run specific tasks worth it, or are you happy letting it spin its wheels, searching for a solution?