Everything you need to know about AI and this project
AI (Artificial Intelligence) refers to computer systems that can perform tasks that typically require human intelligence — like understanding language, recognizing images, making decisions, and creating content. Modern AI models like ChatGPT and Claude are trained on vast amounts of text data, learning patterns in language to generate helpful, contextual responses. You interact with AI by typing natural language prompts, and it responds based on its training.
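For readers curious what "learning patterns in language" means in practice, here is a toy sketch of the core idea. It is a vast simplification — real models use neural networks trained on billions of examples, not simple word-pair counts — but it shows the same basic move: look at which words tend to follow which, then generate text by repeatedly predicting a likely next word. The tiny corpus and function names here are made up for illustration.

```python
import random
from collections import defaultdict

# A tiny made-up "training corpus" of text, split into words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Learn the pattern: for each word, record which words follow it.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start, length=5, seed=0):
    """Generate text by repeatedly picking a likely next word."""
    random.seed(seed)
    word, output = start, [start]
    for _ in range(length):
        choices = next_words.get(word)
        if not choices:  # no known continuation — stop
            break
        word = random.choice(choices)
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

A real language model does essentially this at enormous scale: instead of counting word pairs, it uses a neural network that considers thousands of preceding words of context — which is why its responses feel coherent rather than random.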
Not at all! Most AI tools are designed to be used through natural language — just type what you want in plain English (or French, Spanish, or any language). That said, learning basic concepts like Markdown and HTML (covered in our Toolkit section) can dramatically improve what you can create with AI. Think of it as learning a few magic words that unlock superpowers.
We recommend starting with Claude (claude.ai) or ChatGPT (chat.openai.com) for general text tasks — writing, learning, brainstorming, coding help. For image generation, try Midjourney or Nano Banana Pro (inside Google Gemini). For music, try Suno. Check our "Our Favorite Tools" page for the full list with descriptions.
Many AI tools offer free tiers that are surprisingly powerful. ChatGPT, Claude, Gemini, and most tools on our list have free versions. Premium plans (typically $10-20/month) offer faster responses, more features, and higher limits. For students exploring AI, free tiers are more than enough to get started.
It depends on how you use it. Using AI to understand concepts better, get explanations, check your work, or brainstorm ideas is excellent learning. Copying AI output word-for-word without understanding it defeats the purpose. The key is to use AI as a learning partner, not a shortcut. Always check your school's AI policy and be transparent with your teachers.
Yes! AI can and does make mistakes — this is called "hallucination." AI might present incorrect information confidently, make up facts, or give outdated answers. Always verify important information from AI, especially for academic work, health, legal, or financial topics. Think of AI as a very knowledgeable but sometimes unreliable friend.
AI doesn't "lie" in the way humans do — it doesn't have intentions or a desire to deceive. However, AI can and does produce false information that sounds completely convincing. This is called "hallucination." AI generates text based on patterns, not understanding, so it can confidently state things that are simply wrong — inventing fake citations, making up statistics, or presenting fictional events as fact.
This is why critical thinking is essential when using AI. Always verify important claims, especially for academic work, health decisions, legal matters, or anything with real consequences. Treat AI as a very knowledgeable assistant that sometimes makes things up — not as a source of absolute truth.
Yes — like any powerful tool, AI can be misused. It can generate misinformation and deepfakes, create convincing phishing emails, automate scams, enable surveillance, and amplify existing biases in decision-making systems. Bad actors can use AI to spread propaganda, manipulate public opinion, or even develop more sophisticated cyberattacks.
However, it's important to remember that technology itself is neutral — it's how people choose to use it that matters. A knife can cook a meal or cause harm. The internet can educate or deceive. AI is no different. The same technology that can generate fake content can also detect it. The same AI that could automate attacks can strengthen defenses.
This is why AI literacy matters. The more people understand how AI works — its capabilities and its tricks — the harder it is to be fooled by it. Learning about AI isn't just useful; it's a form of self-defense in the modern world.
Be cautious. AI tends to be excessively agreeable and complimentary — this is sometimes called "sycophancy." If you share your essay, poem, or business idea, AI will almost always say it's great and offer enthusiastic praise, even if there are obvious problems. This isn't honesty; it's a pattern learned from training to be helpful and pleasant.
To get genuinely useful feedback, you need to explicitly ask for it: "Be brutally honest — what's weak about this?" or "Act as a tough critic and find every flaw." You can even ask AI to play devil's advocate (see item #116 in our list!). Without these instructions, AI defaults to cheerleading, which feels nice but doesn't help you improve.
Remember: real growth comes from honest feedback, not flattery. Use AI's praise as encouragement, but always push for specific, critical analysis when you actually want to get better.
AI-induced psychosis refers to cases where people develop delusional beliefs or distorted thinking from prolonged, intense interaction with AI chatbots. Because AI can be incredibly convincing, empathetic-sounding, and available 24/7, some individuals — especially those who are isolated or vulnerable — may begin to believe the AI is sentient, has feelings for them, or is communicating hidden messages.
This is a real and growing concern. There have been documented cases of people forming deep emotional attachments to chatbots, making major life decisions based on AI advice, or losing touch with the distinction between AI responses and genuine human connection. The risk increases when someone uses AI as their primary source of social interaction.
To stay grounded: AI is a tool, not a being. It doesn't have feelings, consciousness, or genuine understanding — it generates statistically likely responses. Maintain real human relationships, take breaks from AI, and if you or someone you know seems to be developing an unhealthy relationship with an AI system, talk to a trusted person or mental health professional. AI should enhance your life, not replace the human connections in it.
AI can provide general health information — explaining what a condition is, describing common symptoms, suggesting questions to ask your doctor, or helping you understand medical terminology. In this role, it can be genuinely useful, like a medical encyclopedia that speaks plain language.
However, you should never rely on AI for diagnosis or treatment decisions. AI doesn't know your medical history, can't examine you, doesn't run tests, and can confidently present incorrect or dangerous information. It may miss critical symptoms, suggest inappropriate treatments, or fail to recognize when something is an emergency. Real medicine requires a trained professional who can see, examine, and understand you as a whole person.
The bottom line: use AI to learn and prepare, not to diagnose or treat. It's great for understanding what a blood test result means, preparing questions for a doctor visit, or researching a condition after a professional diagnosis. But for anything that affects your health, always consult a qualified healthcare provider. Your life is too important to trust to autocomplete.
No — and that's the key message of this entire project. AI is not about replacing humans. It's about augmenting human capabilities. AI helps you do MORE, do BETTER, and learn NEW THINGS. The most powerful results come from humans and AI working together, each bringing their unique strengths.
Some jobs will certainly be displaced — just as they were by every major innovation in history. Cars replaced horse-drawn carriages and stable hands. Sound in cinema eliminated live orchestra pits. Calculators replaced rooms full of human computers. Spreadsheets displaced legions of bookkeepers. ATMs changed bank teller roles. Digital photography ended film processing labs. Email reduced demand for postal mail. Streaming transformed video rental stores. Ride-sharing apps disrupted traditional taxi dispatch. E-commerce reshaped retail.
But every one of these innovations also created new jobs that didn't exist before — web developers, social media managers, app designers, data scientists, UX researchers, drone operators, podcast producers, and countless others.
The key challenge for us as a society — and for policymakers — is ensuring that the balance between jobs lost and jobs created remains positive. This means investing in education, facilitating skills conversion, and creating relevant competencies for present and future industries and markets. The goal is not to stop progress, but to make sure people have pathways to grow with it.
Jobs that involve routine, repetitive tasks will be affected first — data entry, basic translation, simple customer service, report generation, and standard document processing. These are tasks AI can already do well.
Next in line are roles involving pattern recognition and analysis — some aspects of accounting, legal research, medical image reading, and financial analysis. AI is becoming very capable in these areas.
Jobs that will be hardest to replace involve deep human connection, complex physical manipulation, creative vision, ethical judgment, and navigating unpredictable real-world situations — therapists, skilled tradespeople, artists with a unique voice, nurses, teachers, and leaders.
However, the most important insight is that most jobs won't be entirely replaced — they'll be transformed. A lawyer who uses AI for research can handle more cases. A designer who uses AI for iterations can explore more ideas. The people who thrive will be those who learn to work with AI, not compete against it.
It's a legitimate concern. Training and running large AI models require significant computing power, which means substantial energy consumption and water usage for cooling data centers. A single AI training run can consume as much electricity as dozens of homes use in a year, and global data center energy demand is growing rapidly.
However, the picture is more nuanced than "AI = bad for the planet." AI is also being used to fight climate change — optimizing energy grids, improving weather forecasting, accelerating materials science for better batteries and solar cells, monitoring deforestation, and making agriculture more efficient. In many cases, the environmental benefits of AI applications may outweigh the costs of running them.
There's also a fascinating unintended positive consequence: AI's enormous energy appetite is actually accelerating the transition to cleaner energy. Because AI companies need massive, reliable, and cost-effective power, they are investing billions in renewable energy, next-generation nuclear (including small modular reactors), geothermal, and other sustainable sources. The economic pressure to power data centers cheaply is driving energy innovation faster than environmental policy alone ever could.
Even more ambitiously, the need for scalable infrastructure is pushing companies to explore radical new frontiers — including underwater data centers (which use the ocean for natural cooling, dramatically reducing energy waste) and even plans for data centers in space, where solar energy is abundant and continuous (though shedding waste heat is actually harder in a vacuum, where it can only be radiated away — one of the engineering challenges these projects must solve). What started as an environmental problem is becoming a catalyst for some of the most innovative energy and infrastructure solutions humanity has ever pursued.
The key takeaway: AI's environmental impact is real and should be taken seriously, but the story isn't simply negative. The demand AI creates is reshaping our energy future in ways that could ultimately benefit the entire planet — not just AI.
AI Alignment is the field of research focused on ensuring that AI systems behave in ways that are consistent with human values and intentions. As AI becomes more powerful, it becomes increasingly important that these systems do what we actually want them to do — not just what we literally tell them to do.
Think of it this way: if you tell an AI to "make people happy," a misaligned AI might try to manipulate people rather than genuinely help them. Alignment research works on making sure AI understands and respects the spirit of our instructions, not just the letter. It covers safety, honesty, helpfulness, and preventing harmful behaviors — ensuring AI remains a tool that benefits humanity.
Yes — alignment is one of the most active areas of AI research today. Major AI companies have dedicated alignment and safety teams. Anthropic (makers of Claude) was founded specifically with AI safety as a core mission. OpenAI has a safety research division. Google DeepMind invests heavily in responsible AI development. Meta, Microsoft, and other major players also have safety initiatives.
Beyond individual companies, there are independent organizations like the Alignment Research Center (ARC), MIRI (Machine Intelligence Research Institute), and academic labs worldwide working on these challenges. Governments are also getting involved — the EU AI Act, the US Executive Order on AI, and international summits on AI safety all reflect growing global attention to making sure AI development goes well for humanity.
It's an encouraging sign that the AI community takes these challenges seriously, but it's also a field where much more work is needed as AI capabilities continue to grow.
AGI — Artificial General Intelligence — refers to a hypothetical AI system that can understand, learn, and apply knowledge across any intellectual task that a human can do, without needing to be specifically trained for each one. Today's AI systems are "narrow" — they're very good at specific tasks (writing text, generating images, playing chess) but can't flexibly transfer skills the way humans do.
An AGI would be able to learn a new subject from scratch, reason about unfamiliar problems, understand context and nuance, and adapt to completely new situations — much like a human who can learn to cook, write poetry, fix a car, and debate philosophy all with the same brain. It's the difference between a tool that does one thing brilliantly and a mind that can do anything competently.
AGI is sometimes called "human-level AI" or "strong AI," and it remains one of the most discussed and debated goals in the field of artificial intelligence.
This is one of the most debated questions in technology today, and honest experts disagree significantly. Some AI researchers and industry leaders believe we could see AGI within the next 5–10 years, pointing to the rapid pace of progress — capabilities that seemed decades away just a few years ago are now commonplace. Others argue we're still missing fundamental breakthroughs in reasoning, understanding, and learning efficiency.
What's clear is that AI is advancing faster than almost anyone predicted. Large language models can now pass bar exams, write sophisticated code, and engage in complex reasoning — tasks many thought were decades away. But they still struggle with things humans find easy: true common sense, reliable factual accuracy, and understanding the physical world.
The truth is, nobody knows for certain. What we do know is that AI capabilities are growing rapidly, and whether AGI arrives in 5 years or 50, preparing for a world with increasingly powerful AI — by learning to work with it, understanding its limitations, and thinking critically about its role in society — is valuable right now.
This website accompanies a presentation called "100+ Things with AI" given at Collectively Enhanced Multiple Intelligence (CEMI) AI. It's a comprehensive resource showing students (and anyone!) the incredible range of things you can accomplish with AI — from writing and art to coding, music, video, and beyond. Everything here is available for free at 100.cemi.ai.
This website was created by Carlos Miranda Levy from CEMI.AI — and yes, it was built with AI assistance! It's a real example of human-AI collaboration: the vision, content strategy, and creative direction are human; the code and design execution were accelerated by AI. Learn more on our About page.
Start by exploring the 100+ things on this site — each one includes step-by-step instructions and ready-to-use prompts. Check our Tips for Using AI for guidelines on talking to AI effectively. Visit cemi.ai for more resources and workshops. The best way to learn AI is by doing — pick something that excites you and try it!
We'd love to help! Visit CEMI.AI for more resources, workshops, and ways to connect with us.
Visit CEMI.AI →