What Is Artificial Intelligence (AI)? And What Is Machine Learning? A Summary of Five Ways That AI and Machine Learning Can Improve Your Business or Accelerate Scientific Research in 2026
If you told us back in 2021 that artificial intelligence (AI) and machine learning (ML) would become so big in 2026, we might have thought you were crazy.
But here we are in early 2026 and whether you're looking at Google Search Trends data, articles from leading media outlets like The New York Times and The Wall Street Journal, or social media trends on platforms like Instagram and Facebook, AI and machine learning are having their moment right now.
But while people in all walks of life throw around terms like “AI”, “generative AI”, “machine learning”, “deep learning”, “neural networks”, and “AI hallucinations”, few easy-to-understand definitions and explanations of these terms exist.
And even fewer discussions exist that describe clear ways that AI and machine learning can help enhance either business processes or scientific research studies, so our team at Chestnut Hill Analytics wrote this article to cover:
Easy-to-understand and intuitive definitions of AI, machine learning, and related concepts like generative AI, AI hallucinations, deep learning, and leading machine learning techniques
A summary of five specific examples of how AI or machine learning can improve business processes and enhance scientific research studies
1. What is AI More Generally? And What Are The Most Important Concepts Related to AI?
Discussions of artificial intelligence date back to 1955, when the term was first coined by the early computer scientist John McCarthy, but for decades AI appeared mostly in scientific journals and in science fiction films, TV shows, and books.
While many definitions of artificial intelligence exist, here’s an intuitive definition of AI that our team at Chestnut Hill Analytics developed:
Artificial Intelligence (AI) [noun]: The use of computer hardware and machine learning software to engage in tasks or behaviors typically requiring human intelligence.
Our definition is concise, easy-to-understand, and includes both the physical technology (i.e., computer hardware, or the actual pieces of a computer system that work together in AI tools) and the programs that run on that physical technology (i.e., computer software, or the programming code that executes commands to provide responses to human users when using AI programs).
The second part of the definition refers to AI performing tasks that usually require human intelligence, which can range from writing articles (though not this one, which was written entirely by our Founder, Dr. Eric Louderback, without any AI ;) to creating pieces of art (like in ChatGPT’s GPT-4o "omni" model) to identifying objects in images (like via YOLOv8) to driving cars (aka autonomous vehicles, like Waymo).
ChatGPT is currently one of the leading AI models for text prompts, with Google’s Gemini steadily catching up.
Within AI, there are several other key concepts to define. Let’s take a look at two of the most important ones here.
The first is generative AI. Let’s take the same approach as above and provide an easy-to-understand definition of generative AI:
Generative AI (Gen AI) [noun]: The use of AI to compose text, to create a graphic or piece of art, or to make a video via the use of large existing datasets and machine learning algorithms.
Gen AI can be seen in several widely-available tools currently on the market, including OpenAI’s ChatGPT and Google’s Gemini (for text generation), as well as OpenAI’s Sora 2 and Google’s Nano Banana (for image and video generation).
The second important concept related to AI is AI hallucination(s). Here’s a definition that we developed:
AI Hallucination(s) (AIH) [noun]: The tendency for AI — including generative AI — to provide false or misleading text responses, or incorrect (aka fake) images or video elements.
AI hallucinations can be costly for businesses and cause other problems, with some high-profile examples reported in major media outlets.
For example, The Guardian reported on a recent case in Australia where the consulting firm Deloitte was required to partially refund a $440,000 (AUD) government contract after its AI-assisted report on IT compliance contained key errors and multiple fabricated references.
An article in The New York Times from May 2025 on rates of AI hallucination among AI chatbots found shockingly high rates in benchmark testing, with some models hallucinating in almost 50% of cases.
While AI models continue to improve in accuracy and vendors are taking steps to reduce these potentially costly errors, it remains a best practice to keep hallucinations in mind and to have humans verify prompt responses or conduct similar analyses themselves.
Speaking of potential issues when working with AI, you might want to read another of our Knowledgebase articles on a similar subject, entitled: “Top 5 Mistakes Businesses Make When Working with AI and Benefits of Real Data Analytics from Real Humans“.
2. What is Machine Learning and Deep Learning? And What Are Some Key Types of Machine Learning Models?
Machine learning models rely on large code bases and huge datasets for “training.“ Once trained, they can predict other variables with varying degrees of accuracy.
Now that we’ve covered AI and important AI-related definitions, let’s take a closer look at one more important part of the AI definition we provided above: the concept of machine learning.
Machine learning — and a subset of machine learning known as deep learning — is a crucial part of AI and of data science methods more broadly.
Let’s continue our approach to providing concise and intuitive definitions by defining machine learning:
Machine Learning (ML) [noun]: Using computer hardware with data science software and (usually very large) training datasets to create statistical models that can predict future outcomes in separate, held-out testing datasets.
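To make the train-then-predict workflow in this definition concrete, here’s a minimal sketch using scikit-learn. The data here is synthetic and purely illustrative — a toy example of our definition, not a real analysis:

```python
# A minimal sketch of the machine learning workflow: train a model on a
# training dataset, then predict outcomes in a held-out testing dataset.
# All data below is synthetic, generated just for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))       # one predictor variable
y = 3.0 * X[:, 0] + rng.normal(0, 1, 200)   # outcome with random noise

# Split into a training dataset and a held-out testing dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LinearRegression().fit(X_train, y_train)  # "train" the model
r2 = model.score(X_test, y_test)                  # evaluate on unseen data
```

The key idea the definition captures is that the model is evaluated on data it never saw during training — that’s what lets us trust its predictions about future outcomes.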
Machine learning serves as the foundation of AI and commercially-available AI programs, and is also used for human-driven data science and analytics modeling, like the services we provide here at Chestnut Hill Analytics.
Within machine learning, another key concept is deep learning, which we define as follows:
Deep Learning (DL) [noun]: A type of machine learning that utilizes artificial neural networks (defined below) with multiple layers to automate variable selection and to “learn“ from past mistakes to enhance future accuracy and improve on previous model iterations.
Deep learning is often used for tasks like image recognition and text analysis, where exposure to many examples (for example, different styles of handwriting with varying levels of sloppiness) in training datasets helps to enhance performance over time.
Within machine learning and deep learning, there are many specific types of models that are used to understand and predict datasets of all sizes for business, science, and other purposes.
Let’s take a look at five of the most commonly-used models:
Multiple Regression Analysis: A type of model that uses multiple independent variables and statistical equations to predict a single continuous dependent variable (for example, someone’s future salary amount) or binary dependent variable (for example, whether or not someone will buy something).
Neural Networks: A machine learning approach modeled after the human brain that uses large datasets and layers of “neurons“ representing variables and probabilities to predict one or more dependent variables.
Random Forest Decision Trees: Another machine learning approach consisting of many individual decision trees (usually 500 or more) that reduces overall errors by integrating predictions from multiple smaller models.
Support Vector Machines: A machine learning technique, originally developed by computer science researchers, that predicts an outcome by finding the decision boundary that best separates the classes of data points with the widest possible margin.
Bayesian Probability Models: An older family of machine learning models rooted in the theories of the 18th-century mathematician Thomas Bayes, which predict a two-class (for example, true or false) outcome by multiplying conditional probabilities together, as in the popular naive Bayes classifier.
These five models are the most common in practice. Given their high levels of accuracy and adaptability to many types of data, neural networks rooted in deep learning approaches are usually the go-to ML model for research into business processes and for scientific applications.
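To show how compactly two of the five models above can be applied in practice, here’s a hedged sketch fitting a multiple (logistic) regression and a random forest with 500 trees on a synthetic binary-outcome dataset — illustrative toy data, not a real business or research problem:

```python
# Illustrative sketch: fitting two of the five model types above on a
# synthetic two-class dataset, then comparing accuracy on held-out data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic dataset: 500 rows, 8 predictor variables, one binary outcome
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Multiple regression (logistic form, for a binary dependent variable)
logit = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Random forest: an ensemble of 500 individual decision trees
forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)

logit_acc = logit.score(X_test, y_test)
forest_acc = forest.score(X_test, y_test)
```

In real projects the right choice among these models depends on the data: regression models are easier to interpret, while forests and neural networks often squeeze out more accuracy from complex datasets.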
Artificial Neural Networks are a type of machine learning model and are loosely based on how axons and dendrites in the human brain interact. This type of model is the underlying algorithm for large language models, including ChatGPT.
3. What Are Five Specific Examples of How AI and Machine Learning Can Improve Business Processes, Marketing and Profitability?
So now that we’ve covered definitions of AI, machine learning and related concepts, let’s take a closer look at practical applications of AI and data science to business use cases.
To keep this article from taking up your entire day, we’ve provided five examples here:
Understanding how performance in past quarter(s) might impact future financial outlooks
Predicting customer or client purchasing behavior
Identifying subsets or “clusters“ of customers to assess and optimize market segmentation
Brainstorming ideas for marketing copy text and advertising graphics
Identifying areas of growth and avenues for enhanced profitability
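As one concrete illustration of the third use case above (finding customer “clusters“ for market segmentation), here’s a minimal k-means sketch. The customer features and the number of segments are assumptions invented for this example:

```python
# Hedged sketch of customer segmentation with k-means clustering.
# The "customers" below are synthetic: annual spend and visits per year
# drawn from three rough segments (all numbers are illustrative).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
segments = [rng.normal(center, 1.0, size=(50, 2))
            for center in ([20, 2], [50, 10], [90, 25])]
customers = np.vstack(segments)          # 150 customers, 2 features each

# Standardize features so spend and visit counts weigh equally
scaled = StandardScaler().fit_transform(customers)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaled)
labels = kmeans.labels_                  # each customer's segment: 0, 1, or 2
```

In practice, the number of clusters is itself something to evaluate (for example, with silhouette scores) rather than assume, and the resulting segments feed directly into targeted marketing decisions.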
These real-world examples of applying AI and machine learning techniques represent starting points for businesses and can of course be adapted to a variety of industries, whether we’re talking about innovative products designed by a startup company, services delivered by an established publicly-traded firm, or analytics platforms used by sports teams and entertainment venues.
While more than five use cases certainly exist, these are the five we identified here, and we’d be happy to provide a free consult regarding other potential use cases if you’re interested (please feel free to fill out our contact form or email us at info@chestnuthillanalytics.com).
AI and machine learning models can provide a variety of benefits for businesses of all types, including companies that sell products to customers and those that provide services to clients.
4. What are Five Examples of How AI and Machine Learning Can Advance Scientific Research Goals?
Now that we’ve covered business applications, let’s discuss five uses for AI and data science in scientific research studies.
Just like we did above, we’ve broken these practical applications in science into five examples for clarity and ease of organization:
Predicting whether or not participants might get a certain disease in the future
Identifying different clusters of people based on multiple variables from survey responses
Graphically visualizing the most important variables that predict a dependent variable
Testing how patient visits, quality of interaction and satisfaction influence future health care utilization and prescription drug use
Evaluating how different public health advertising programs impact future health behaviors in different socioeconomic groups
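To sketch two of the research uses above together (predicting a disease outcome, and ranking which variables matter most for that prediction), here’s a toy random forest example. Every variable name and coefficient below is a hypothetical assumption for illustration, not data from any real study:

```python
# Hedged sketch: predict a synthetic disease outcome from a few health
# variables, then rank predictor importance. All data is simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 400
age = rng.normal(50, 12, n)
bmi = rng.normal(27, 4, n)
exercise_hours = rng.uniform(0, 10, n)

# Simulated ground truth: risk rises with age and BMI, falls with exercise
risk = 0.04 * age + 0.08 * bmi - 0.15 * exercise_hours
disease = (risk + rng.normal(0, 0.5, n) > risk.mean()).astype(int)

X = np.column_stack([age, bmi, exercise_hours])
model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, disease)

# Importance scores sum to 1 and can be plotted as a bar chart for use case 3
importances = dict(zip(["age", "bmi", "exercise_hours"],
                       model.feature_importances_))
```

In a real study, of course, this kind of model would be fit on actual participant data, validated on held-out samples, and interpreted alongside domain expertise rather than importance scores alone.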
As you can see, AI and machine learning are extremely versatile and adaptable to a wide variety of research questions and scientific domains, which presents incredible possibilities for research and grant applications, especially as these tools continue to improve in the future.
Our team at Chestnut Hill Analytics has many years of experience publishing peer-reviewed scientific articles using these and other data science techniques, and we are always learning more about these approaches and cutting-edge innovations in data science and analytics.
As scientific research is always evolving, there are undoubtedly many other uses for AI and machine learning in science, and we’d be happy to provide a free consultation about other unique applications or data analysis questions in your scientific study or grant application if you’re interested (please feel free to fill out our contact form or email us at info@chestnuthillanalytics.com).
Scientists are currently leveraging AI tools and machine learning algorithms, with growth in these areas predicted to accelerate.
5. The Bottom Line: AI and Machine Learning Will Continue to Improve in 2026, 2027 and Beyond, While Growing in Popularity in Business and Science
As technology continues to undergo innovation and knowledge about AI and machine learning becomes more widespread in 2026, 2027 and beyond, we predict that we’ll see further growth and popularity in these areas in both business and scientific research.
Many concerns exist, such as AI hallucinations and the potential for AI to displace human jobs, so it is imperative that AI ethics and regulations advance alongside these incredibly powerful technologies.
Thanks so much for reading this Knowledgebase article and please feel free to leave a comment below or send us an email at info@chestnuthillanalytics.com with your thoughts!
Here at Chestnut Hill Analytics Insights, our experts have extensive real-world experience with data science software programs and providing guidance about AI tools.
Whether it’s a business expansion plan or scientific research study, our consultants are here to help you make sense of complex data of all varieties and provide informative visualizations, actionable steps, and easy-to-understand recommendations.
We offer a large portfolio of data science, analytics and consulting services, which we describe in more detail on our Services page.
As AI and machine learning continue to evolve, we’re proud to be part of the Boston business and scientific ecosystem.
If you’re interested in our company’s story and what we offer, please check out this page on our site.
To learn more about our experts here at Chestnut Hill Analytics Insights, take a look at our Team page.
Please feel free to send us an email, shoot us a message on Instagram, LinkedIn or Facebook, or fill out our contact form to learn more or schedule a free consultation today!