LSU Research Insights: Next-Generation AI Aims for Smarter Security With Better Energy Efficiency
January 06, 2026
It’s become impossible not to associate the future with artificial intelligence. AI and machine learning technologies are poised to make as large an impact on human life as the Industrial Revolution did, if not a larger one.
2026 & Beyond
As we enter a new year of research and discoveries, our LSU experts are looking forward to the biggest challenges we will face and advances we can anticipate. What might our future look like, “soonish”? How can we help to shape the future we want to see?
In this Q&A, we ask James Ghawaly Jr., an assistant professor in the LSU College of Engineering with a joint appointment in the LSU Center for Computation & Technology (CCT), about his views on the future of AI research.
“The human brain ... is remarkably efficient. It consumes less than 20 watts on average, yet outperforms even the most advanced AI systems, which require orders of magnitude more energy.”

James Ghawaly Jr.,
LSU assistant professor
Where do you see the field of AI research going in the next 1-5 years?
Predicting the future of AI is challenging. The field moves rapidly, with new technologies emerging daily. That said, I believe we're still in the era of chasing scaling laws: models continue to improve by training on more data, with more compute, for longer periods.
Transformers, the foundational architecture behind modern language models, are remarkable in that scaling these factors reliably improves performance.
Editor’s note: A transformer is a neural network architecture widely used in large language models (LLMs) that “learns” context by tracking relationships in sequential data such as text.
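Editor’s note, continued: For readers curious about the mechanics, below is a minimal sketch in Python (using NumPy, with illustrative array sizes and values rather than anything from a production model) of scaled dot-product attention, the core transformer operation that scores how strongly each token in a sequence relates to every other token.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: each row of Q attends over all rows of K/V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # pairwise relevance scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over each row
    return weights @ V                                # context-weighted mix of values

# Illustrative example: 4 tokens, each represented by an 8-dimensional vector.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
output = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(output.shape)  # (4, 8): each token now carries context from the others
```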
Beyond continued scaling, I don't foresee any major architectural breakthroughs in the next year. More relevant to my research at the intersection of AI and security, I expect AI Security (the study of vulnerabilities and defenses in AI systems) to grow significantly.
As generative AI becomes embedded in our digital infrastructure, understanding the unique security characteristics of these systems is paramount.
What are some challenges you foresee in your AI-related research in the next few years?
AI models are becoming increasingly powerful, but this comes at the cost of significant energy consumption. Our goal is to deploy AI for security applications at the edge, that is, running security algorithms directly at the point of data collection (on local devices) rather than in the cloud.
While we want to leverage recent advances in model capabilities, we're constrained by the challenge of reliably delivering enough power to these edge devices.
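Editor’s note: One common way to stretch a tight power and memory budget at the edge is post-training quantization, which stores a model’s weights as 8-bit integers instead of 32-bit floats. The sketch below (Python/NumPy, with illustrative values; it is a general technique, not a description of Ghawaly’s specific methods) shows the basic idea.

```python
import numpy as np

def quantize_int8(weights):
    """Map float32 weights to int8 plus a scale factor (symmetric quantization)."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for computation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=(256, 256)).astype(np.float32)
q, s = quantize_int8(w)
print(w.nbytes, q.nbytes)                  # 262144 bytes vs 65536 bytes: ~4x smaller
print(np.abs(w - dequantize(q, s)).max())  # small rounding error from quantization
```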
Where would you LIKE to see your field go in the next 1-5 years? What are you most excited about in terms of research and discoveries in your field?
The human brain (and biological neural systems more broadly) is remarkably efficient. It consumes less than 20 watts on average, yet outperforms even the most advanced AI systems, which require orders of magnitude more energy, particularly during training.
I'd like to see the field adopt a more bio-inspired approach to neural network design. This is challenging given our incomplete understanding of how the brain works, but I believe we know enough to make significant progress.
Areas like neuromorphic computing, TinyML, and bio-inspired computing deserve far more attention. In my area of building AI models for security, this would enable the deployment of more capable models in increasingly resource-constrained environments.
Editor’s note: Neuromorphic computing uses a hardware approach that mimics how the brain works by using artificial brain cells and synapses (the connections between brain cells) to perform computations. TinyML, or Tiny Machine Learning, involves deploying optimized and specialized machine learning models on low-power, resource-constrained devices “at the edge.”
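Editor’s note, continued: As a concrete illustration of those “artificial brain cells,” here is a minimal sketch (Python/NumPy, with made-up parameter values) of a leaky integrate-and-fire neuron, the kind of spiking unit neuromorphic hardware typically implements: membrane voltage accumulates input current, leaks over time, and emits a discrete spike when it crosses a threshold.

```python
import numpy as np

def leaky_integrate_and_fire(current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Simulate one leaky integrate-and-fire neuron driven by an input current trace."""
    v, spikes = 0.0, []
    for i_t in current:
        v += dt / tau * (-v + i_t)      # leak toward rest, integrate input current
        if v >= v_thresh:               # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_reset                 # reset membrane voltage after firing
        else:
            spikes.append(0)
    return np.array(spikes)

# Illustrative input: constant drive strong enough to make the neuron fire periodically.
spike_train = leaky_integrate_and_fire(np.full(200, 1.5))
print(spike_train.sum(), "spikes in 200 time steps")
```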
I'm excited about emerging tools that bridge computational neuroscience and AI. A recent example is Jaxley, a differentiable simulation framework published in Nature Methods that enables GPU-accelerated training of biophysically detailed neuron models using backpropagation*.
Tools like this could accelerate bio-inspired approaches to neural network design by making it easier to learn from and replicate the computational principles of real neurons. If we can model how biological systems achieve remarkable capabilities with minimal energy, we may uncover design principles that lead to more efficient AI architectures.
*Short for "backward propagation of error," backpropagation is a method of training machine learning algorithms to learn from their mistakes.
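Editor’s note: As a minimal illustration of that idea (a toy example in Python/NumPy, not tied to any particular framework), the sketch below trains a single linear neuron: the error on each prediction is propagated backward into gradients, and the parameters are nudged to reduce future mistakes.

```python
import numpy as np

# Toy data: learn y = 2*x + 1 from noisy samples (illustrative values only).
rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, size=100)
y = 2 * x + 1 + rng.normal(scale=0.05, size=100)

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    y_pred = w * x + b                  # forward pass: make predictions
    error = y_pred - y                  # how wrong each prediction is
    grad_w = 2 * np.mean(error * x)     # backward pass: propagate the error
    grad_b = 2 * np.mean(error)         #   into a gradient for each parameter
    w -= lr * grad_w                    # update step: reduce the mistake
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # close to the true values 2 and 1
```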
What do you wish more people knew about your area of research and its implications?
Two things, both intended to dispel common misunderstandings about AI that I've encountered since 2023.
First, large language models (LLMs) like ChatGPT represent only a small subset of the field. LLMs are a type of AI, but they are not representative of the whole; their widespread public availability is a recent development.
Second, AI itself is not new. The McCulloch-Pitts neuron, one of the earliest mathematical models of neural computation, was introduced in 1943. Frank Rosenblatt's perceptron followed in 1958, demonstrating that machines could learn from data.
The term "artificial intelligence" itself was coined at the Dartmouth Conference in 1956. Since then, the field has weathered multiple cycles of enthusiasm and so-called "AI winters," steadily advancing through decades of research in symbolic reasoning, expert systems, machine learning, and now deep learning.
What we're witnessing today is the democratization of generative AI, not the birth of the field.