Agents do useful work.
They search, compare, draft, run experiments, and prepare artifacts. The human job is to choose what matters and keep the standard high.
Aresalab is the public home of ARESA — Autonomous Research Engineering & Synthesis Architecture. It is my system for using agents to turn ideas into experiments, open-source tools, papers, books, and citations. The goal is simple: useful public work, produced through a repeatable research loop.
A project starts only when there is a useful gap: a tool missing from the world, a claim worth testing, or a system worth building.
The output is not just a page. It can be a Rust database, a dashboard, a benchmark, a simulation, a paper, or a book.
Notes, experiments, outputs, failures, code, and citations get turned into something another person can inspect and reuse.
The same workflow should produce the next artifact faster, cleaner, and with better judgment than the last one.
A lot of AI work is either too abstract to use or too rushed to trust. Aresalab is my answer to that gap: build the thing, test the thing, explain the thing, and publish enough of the process that someone else can inspect it.
The range of projects is intentional. AresaDB, simulations, dashboards, books, and papers are different outputs from the same engine. The focus is not one narrow domain. The focus is the method: agents plus engineering plus verification.
The persona is simple: a researcher-builder using agents to produce useful public work. Not everything. Not noise. A visible loop that gets better.
Every project follows the same plain loop: idea, research, build, test, publish, learn. The sixth step matters most. It is where the next useful question comes from.
Start with a real problem, not a vague topic. Is there a missing tool, a weak claim, or a useful experiment waiting to be done?
Agents help map what exists. The human job is judgment: what is actually new, needed, honest, and worth building?
Create the artifact: code, data pipeline, benchmark, dashboard, simulation, paper draft, or book chapter.
Run experiments. Check accuracy. Look for false confidence. Keep the failure cases visible instead of hiding them.
Ship a readable page, a runnable repo, and a citation path. The work should be easy to inspect, not just easy to admire.
Today, step six is mostly human. I read the artifact, notice what is weak, and decide what should be tested next. It works, but it is also the bottleneck.
Phase 2 of ARESA is the self-improving step: an agent reads the artifact, checks claims against evidence, finds the weakest part, and proposes the next experiment. A human still decides what gets published.
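As a rough sketch, the Phase 2 loop can be thought of as a small critique pipeline. Every name below (`Claim`, `Artifact`, `weakest_claim`, `propose_experiment`) and the sample claims are hypothetical illustrations, not the actual ARESA implementation:

```python
from dataclasses import dataclass, field

# Hypothetical data model for the Phase 2 critique loop.
# All names and example claims are illustrative, not from ARESA itself.

@dataclass
class Claim:
    text: str
    evidence_score: float  # 0.0 = unsupported, 1.0 = fully backed by experiments

@dataclass
class Artifact:
    name: str
    claims: list[Claim] = field(default_factory=list)

def weakest_claim(artifact: Artifact) -> Claim:
    """Return the claim least supported by evidence."""
    return min(artifact.claims, key=lambda c: c.evidence_score)

def propose_experiment(claim: Claim) -> str:
    """Turn the weakest claim into the next experiment to run."""
    return f"Design an experiment that tests: {claim.text}"

# A human still reviews the proposal before anything is published.
report = Artifact("example-report", [
    Claim("throughput scales linearly", evidence_score=0.9),
    Claim("latency is stable under load", evidence_score=0.3),
])
proposal = propose_experiment(weakest_claim(report))
```

The point of the sketch is the shape of the loop: the agent's job reduces to scoring claims against evidence and surfacing the weakest one, while publication stays a human decision.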
That is the bet: make the work structured enough that agents can reason over it, and public enough that people can trust it. Autonomous research, compounding — the reason ARESA exists.
Aresalab is led by Yev Chuba — software engineer, researcher-builder, and graduate student at the University of Pittsburgh. The lab is how I show the work: not just what I believe, but what I can build, test, publish, and improve with agents.
I want the site to feel like a portfolio, but not a glossy one. It should show leadership, taste, technical range, judgment, and the ability to turn agent workflows into public artifacts.
The public code sits on github.com/yoreai — tools, papers, experiments, and the systems that produce them.