A private research initiative exploring AI, technology, construction, and beyond.
This research explores a next-generation computer chip architecture purpose-built for artificial intelligence. Designed from the ground up for AI workloads, the new approach delivers faster, more efficient performance and could reshape how the industry powers advanced AI systems.
A key breakthrough in this work is streamlining the math used for AI. By proving transformer models can run with leaner calculations, it opens the door to faster, more efficient, and simpler hardware — and a future beyond the limits of traditional compute designs.
Trained a GPT-style transformer using a radically simplified computation approach.
This is a potentially world-first milestone that shows large transformer models can be trained with far leaner, more efficient math. It marks a foundational shift in how these models can be built and deployed, enabling faster, simpler, lower-power hardware across the AI stack.
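The specifics of the training method are not public, but the general idea of replacing FP32 arithmetic with leaner integer math can be illustrated with a standard int8-quantized matrix multiply. This is a generic sketch, not the research's actual technique; the function names and the symmetric per-tensor quantization scheme are illustrative assumptions.

```python
# Illustrative only: a common int8-quantized matmul, NOT the method
# developed in this research. It shows how integer accumulation plus a
# single rescale can stand in for full FP32 multiply-accumulate.
import numpy as np

def quantize(x: np.ndarray, bits: int = 8):
    """Symmetric per-tensor quantization to signed integers."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(np.max(np.abs(x)) / qmax, 1e-12)
    return np.round(x / scale).astype(np.int32), scale

def int_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    qa, sa = quantize(a)
    qb, sb = quantize(b)
    # All multiply-accumulates happen in integer arithmetic;
    # one floating-point rescale recovers the original magnitude.
    return (qa @ qb).astype(np.float32) * (sa * sb)

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8))
b = rng.standard_normal((8, 4))
approx = int_matmul(a, b)  # integer-path result
exact = a @ b              # FP64 reference
```

Integer multiply-accumulate units are far smaller and lower-power than FP32 ones in silicon, which is why results like this matter for hardware.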
Early benchmarks on MNIST and CIFAR-10 confirmed viability.
These tests were intended as a quick “fail-fast” check before committing deeper development effort.
Custom MAC design outperforms HardFloat 32-bit across key metrics.
This apples-to-apples baseline shows how a redesigned MAC can deliver significant gains over standard FP32 implementations — improving speed, power, and area simultaneously to enable more efficient AI hardware.
~10× minimum overall improvement projected over today's best AI hardware.
Unlike conventional approaches that inherit significant complexity and cost, this design emphasizes simpler, more efficient computation throughout. These advantages scale across billions of operations, enabling cheaper, faster, and dramatically more energy-efficient AI accelerators. This strategy supports a credible path to ~10× minimum overall improvements over today's best hardware.
For a 1 GW datacenter with 50% AI load, 10× efficiency reduces AI power from 500 MW to 50 MW. At $0.075/kWh (typical U.S. commercial rate), annual energy costs drop from ~$328M to ~$33M, saving ~$295M yearly.
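The savings estimate above follows directly from load, hours per year, and the assumed rate. A minimal sketch of the arithmetic (same assumed inputs: 1 GW facility, 50% AI load, $0.075/kWh, 10× efficiency):

```python
# Worked example of the datacenter savings estimate above.
HOURS_PER_YEAR = 8760
RATE_PER_KWH = 0.075        # assumed typical U.S. commercial rate, $/kWh
ai_load_mw = 1000 * 0.50    # 1 GW facility at 50% AI load -> 500 MW

def annual_cost_musd(load_mw: float) -> float:
    """Annual energy cost in millions of USD for a constant load."""
    kwh = load_mw * 1000 * HOURS_PER_YEAR  # MW -> kW, times hours/year
    return kwh * RATE_PER_KWH / 1e6

baseline = annual_cost_musd(ai_load_mw)       # ~ $328.5M
improved = annual_cost_musd(ai_load_mw / 10)  # ~ $32.9M at 10x efficiency
savings = baseline - improved                 # ~ $295.7M per year
```

The estimate assumes the AI load runs at full power year-round; real utilization would scale both figures down proportionally, leaving the ~10× ratio intact.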
Implemented entirely with open-source tools (OpenLane, Yosys) and no architectural optimization.
These results were achieved without fine-tuning, suggesting significant headroom remains for even greater gains with dedicated design effort.
Design is radically simpler, accelerating the path from concept to silicon.
Simplified hardware reduces development cost, improves verification timelines, and lowers barriers to innovation across the AI hardware stack.
These accomplishments point to a foundational shift in AI compute, impacting everything from chip design to data center infrastructure and beyond.
* Supporting documentation is available upon request for qualified technical reviewers, collaborators, or investors.
In engineering, we talk about the Performance, Power, Area (PPA) triangle — you typically optimize for two at the expense of the third. But if you break that triangle, delivering better performance, lower power, and smaller size at the same time, you’ve created something truly disruptive.
That’s what this research explores: new ways to train and run AI models with much simpler, more efficient computation. The results point to a future where chips can outperform today's best (like the Nvidia H100), with less power, less silicon, and dramatically lower cost.
This unlocks high-performance AI not just in datacenters — but in edge devices: PCs, laptops, wearables, even autonomous vehicles. It brings real-time AI training and inference into reach for countless applications.
Datacenter disruption
Drastically lower power, cost, and footprint for AI training and inference at scale.
Self-learning vehicles
Real-time training onboard — no cloud required, adaptive to every road and driver.
Private AI assistants
HAL 9000-style intelligence that runs locally — always on, always personal, never shared.
Ambient edge AI
Wearables, implants, or sensors that learn and respond — without needing the cloud or even batteries.
Advancement in medicine
Enables real-time diagnostics, personalized treatment, and accelerated drug discovery — transforming how care is delivered and scaled.
Freeman Constructs is a personal innovation lab and future-facing brand for private research, engineering, and venture exploration. It includes work in AI, tech architecture, construction concepts, and investment strategy. Current focus is on a hardware-level breakthrough in AI compute: full transformer training using integer arithmetic, a potential paradigm shift in energy-efficient AI hardware.
Aaron Freeman is a lifelong technologist and entrepreneur. He began coding at age 8, launched a BBS in 1983, and later earned a B.S. in Computer Science and an M.S. in Electrical Engineering (with a focus on computer architecture) from Wichita State University.
While pursuing his Ph.D., he was awarded a Teaching Fellowship and taught courses in programming, computer architecture, genetic algorithms, and other topics. He founded Adroit, a consulting firm serving major clients including AmeriServe (formerly PepsiCo Food Services). He was awarded a U.S. patent for the first internet-connected sprinkler system (US-7123993-B1) and founded SendThisFile, the first file-transfer-as-a-service platform.
He and his wife Larissa are active investors and entrepreneurs across sectors including retail, real estate, and infrastructure. He has led development efforts in robotics, cloud, real estate, trading, and AI.