AI Literacy: Communicating with the Machines

Communication Styles, Not Prompt Engineering

It feels like the end of an era.  For the past few years, much of what we’ve called “prompt engineering” has actually been a workaround—a form of structured communication required to compensate for the limitations of early language models. The models couldn’t reason, so users were forced towards meta-thinking, breaking down logical steps and structuring inputs in a way that simulated reasoning.  We were engineering thought processes (1).

DeepSeek showed the world how much AI reasoning has improved by revealing the model’s thinking process, thereby stepping right past the need to craft it ourselves. Tangentially, this reignited the debate about whether “prompt engineering” is a skill at all. That debate, however, misses the real evolution that’s happening: the shift from prompt engineering to AI communication styles.

And really, we should drop the “engineering” moniker altogether; it immediately puts us into a framing that is too mechanical. Smooth and engaging conversationalists are not called “talking engineers”, great chefs are not “recipe engineers”, and talented writers are not “sentence engineers”. None of their skills are innate either; they require thoughtful development. And while we must now nurture a different set of communication styles for working with the machines, we don’t need to engineer every interaction.

A New Perspective on AI Literacy

If you’ve been using AI long enough, you’ve probably noticed that different tasks require different ways of talking to the model. There’s no one-size-fits-all way to prompt. Just as we adapt our everyday communication styles to context—whether coaching a sports team, mentoring an employee, delivering a lecture, or having a casual kitchen-table conversation—we must now develop one of the next big skills: different communication styles for interacting with AI.

Different types of work demand different ways of engaging with AI. When problem-solving, we structure our input to help the AI break down an issue. When seeking strategic advice, we interact more like we would with a consultant, refining ideas through back-and-forth dialogue. Brainstorming calls for a looser, more open-ended approach, while technical work—like generating code or regulatory documentation—requires precise, structured input to avoid ambiguity.
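As a purely illustrative sketch (the prompts below are invented for this example, not drawn from any real project), here is how the same broad goal might be phrased in a loose brainstorming mode versus a precise technical mode:

# Illustrative only: the same broad goal, phrased for two different modes of work.
# Both prompts are hypothetical examples written for this sketch.

brainstorming_prompt = (
    "We're rethinking how new customers get onboarded. "
    "Give me as many ideas as you can, rough or polished; don't filter yet."
)

technical_prompt = (
    "Write a Python function validate_onboarding_record(record: dict) -> list[str] that "
    "returns a list of validation errors. Required keys: 'email', 'signup_date' (ISO 8601), "
    "and 'plan' (one of 'free', 'pro', 'enterprise'). Return an empty list if the record is valid."
)

print(brainstorming_prompt)
print(technical_prompt)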

AI Literacy Beyond Awareness

In everyday life we innately choose the right communication style for a given situation; now we must begin consciously choosing the right communication or interaction style for the task at hand. This shifts us to a different plane from just learning tools or mastering a specific syntax for prompting. Further distancing ourselves from the mechanistic “engineering” framing, we must develop an intuitive feel for how to “speak AI.” We don’t necessarily have to be the most technical; we just have to get better at choosing the right approach at the right time. This is one aspect of building literacy in communicating with AI.

And just as some people stand out in their verbal literacy—writers, poets, or orators who have mastered the nuances of language—others will stand out in their AI communication literacy. These individuals will intuitively grasp how to communicate effectively with AI in different contexts, making them far more effective than those who simply “use” the tools without understanding how to adapt their interaction style to the model and the task at hand.

Why This Matters

The gap between those who are good at communicating with AI and those who aren’t is going to get bigger. The professionals who learn how to interact fluently with AI will be the ones who get deeper insights, automate more effectively, and stay ahead of the curve. Those who don’t will find themselves using AI in shallow, ineffective ways—getting generic outputs that don’t really help them work smarter.

The people who get the best results won’t just be the ones who use AI the most. They’ll be the ones who know how to switch styles depending on what they need.

Where This All Leads

The way we interact with AI is evolving fast, and we’re leaving the era when AI required careful, step-by-step reasoning prompts. The old-era approach of learning tricks and memorizing templates is losing value; if you want to stay ahead, shift your focus toward becoming fluent in the different ways AI can be engaged. The best results won’t come from better prompt formulas. They’ll come from those who know how to think and communicate at a higher level.

The new era has begun.


Appendix 1

Aren’t you glad you’re no longer faced with “engineering” a prompt to get thoughtful results? I will NEVER have to do this again! Here is an example of a meta-prompt template I designed to mimic reasoning (a technique known as chain-of-thought prompting); a short sketch of how its placeholders might be filled in programmatically follows after the template:

____________________________________

Instruction:

Please provide a comprehensive, step-by-step analysis of the following problem/question.

Ensure that:

– Each part of your reasoning is clearly explained and justified.

– You support your points with evidence, data, or relevant examples.

– You consider multiple perspectives and stakeholder viewpoints.

– You address any ethical implications related to the topic.

– You conclude with actionable recommendations.

– You reflect on potential limitations or biases in your analysis.

Tone and Style:

[Specify the desired tone and style.]

Context:

[Insert a detailed description of the scenario, problem, or question here.]

Guiding Questions:

1. Key Components and Definitions:

– What are the main elements, terms, or concepts involved in this problem?

– Define any critical terms or concepts for clarity.

2. Underlying Principles and Theories:

– What theories, models, or frameworks are relevant?

– How do these principles apply to the current context?

3. Analysis of Perspectives:

– What are the different viewpoints or positions on this issue?

– How might various stakeholders be affected?

4. Evidence and Examples:

– Provide data, case studies, or examples that illustrate key points.

– How does the evidence support or challenge different perspectives?

5. Ethical Considerations:

– Are there ethical dilemmas or considerations involved?

– How should they influence the analysis or decision-making?

6. Potential Solutions or Approaches:

– What are the possible strategies or solutions to address the problem?

– Evaluate the feasibility and implications of each option.

7. Pros and Cons Evaluation:

– What are the advantages and disadvantages of each solution?

– Consider short-term and long-term impacts.

8. Actionable Recommendations:

– Based on the analysis, what actions should be taken?

– Provide a clear and justified recommendation.

9. Self-Reflection and Limitations:

– Identify any potential biases or assumptions in your analysis.

– Suggest areas for further research or questions that remain unanswered.

Example of Guiding Questions applied to [user’s context]:

Topic: Implementing Remote Work Policies in a Tech Company

1. Key Components and Definitions:

Remote Work: Employees working from locations outside the traditional office.

Productivity Metrics: Measures used to evaluate employee performance.

2. Underlying Principles and Theories:

Organizational Behavior Theories: Impact of work environment on performance.

Technology Adoption Models: Factors influencing the uptake of remote work tools.

3. Analysis of Perspectives:

Employees: May value flexibility and work-life balance.

Management: Concerned about maintaining productivity and collaboration.

Clients: Interested in service continuity and quality.

4. Evidence and Examples:

Data: Studies showing productivity increases in remote work settings.

Case Studies: Companies that successfully implemented remote work policies.

5. Ethical Considerations:

Equity: Ensuring all employees have access to necessary resources.

Privacy: Protecting company and personal data in remote environments.

6. Potential Solutions or Approaches:

Hybrid Model: Combining remote and in-office work.

Fully Remote: Transitioning to a completely remote workforce.

Flexible Scheduling: Allowing employees to choose their work hours.

7. Pros and Cons Evaluation:

Hybrid Model Pros: Balances flexibility with face-to-face interaction.

Hybrid Model Cons: May create complexities in coordination.

– (Repeat for other solutions.)

8. Actionable Recommendations:

Recommendation: Adopt a hybrid model with clear guidelines.

Justification: Balances benefits of remote work while mitigating risks.

9. Self-Reflection and Limitations:

Potential Biases: Preference for solutions seen in successful companies.

Further Research: Assessing long-term impacts on company culture.
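For readers who used to wire templates like this into scripts, here is a minimal, purely illustrative sketch of how the bracketed placeholders might be filled in programmatically. The variable names are invented for this example, and the template text is abbreviated rather than reproduced in full:

# Illustrative sketch only: filling the template's placeholders before handing it to a model.
# The template text is abbreviated here; in practice you would paste in the full template above.

COT_TEMPLATE = """Instruction:
Please provide a comprehensive, step-by-step analysis of the following problem/question.
(... full instruction, guiding questions, and worked example from the template above ...)

Tone and Style:
{tone_and_style}

Context:
{context}
"""

prompt = COT_TEMPLATE.format(
    tone_and_style="Professional and balanced, written for a leadership audience.",
    context="Implementing remote work policies in a mid-sized tech company.",
)

print(prompt)  # In the old workflow, this assembled string is what you would send to the model.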
