About AI.tivist
What is AI.tivist.com?
AI.tivist is my platform to inspire, educate, and support marketing professionals who want to work smarter with AI – without losing their human touch. It’s a blend of consultancy, content, and creative rebellion.
What is an AI.tivist?
An AI.tivist is someone who uses AI consciously – not just to optimize, but to humanize. It’s a mindset. A commitment to using technology with empathy, ethics, purpose and imagination.
Who is Luis Cardoso?
I’m Luis Cardoso, a Portuguese marketing strategist and creative consultant based in Munich. With over 25 years of experience in digital innovation, I’ve helped companies in Pharma, Automotive, SaaS, FMCG, and IT create campaigns that matter. You can see my full profile on LinkedIn.
Who is Mr. AI.tivist?
Mr. AI.tivist is my alter ego – a rebellious digital troublemaker for the AI era, with a red clown nose, a sharp mind, and a big heart. He is provocative, questions trends, challenges corporate BS, and believes in using AI with empathy, not just efficiency. And why the red nose? Because it reminds us not to take ourselves too seriously – even in AI. Behind every algorithm is a human. And behind every human, hopefully, a bit of humor.
What is AI marketing?
AI marketing is the use of artificial intelligence to automate, personalize, and optimize marketing processes. From predictive analytics to content generation – AI helps marketers do more with less.
AI terms
What is an API?
An Application Programming Interface – a bridge that lets two programs talk to each other. For example, connecting ChatGPT with your CRM.
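To make that concrete, here's a minimal Python sketch; the CRM endpoint, field names, and API key are all invented placeholders, not a real product's interface:

```python
import requests

# Hypothetical example: send a ChatGPT-drafted note to your CRM via its API.
# The URL, field names, and key below are placeholders, not a real CRM's interface.
API_KEY = "your-crm-api-key"
note = "Follow-up email drafted by ChatGPT for lead #42."

response = requests.post(
    "https://api.example-crm.com/v1/contacts/42/notes",  # placeholder endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"body": note},
)
print(response.status_code)  # 200/201 means the two programs "talked" successfully
```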
What does GPT mean?
Generative Pre-trained Transformer. The architecture behind ChatGPT. It processes and generates language by predicting the next word – at lightning speed. Note that AI assistants and their underlying LLMs can have different names: GPT is OpenAI’s large language model, while ChatGPT is the AI assistant product built on top of it.
What’s the difference between generative AI and predictive AI?
Generative AI creates new content (text, images, code). Predictive AI analyzes data to forecast behavior (like customer churn or sales). Both are powerful – and even better when combined.
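A toy illustration of the predictive side, in Python with scikit-learn and made-up customer numbers:

```python
from sklearn.linear_model import LogisticRegression

# Toy data: [monthly_visits, support_tickets] per customer (invented numbers)
X = [[20, 0], [2, 5], [15, 1], [1, 7], [18, 0], [3, 4]]
y = [0, 1, 0, 1, 0, 1]  # 1 = the customer churned

model = LogisticRegression().fit(X, y)

# Predictive AI: forecast churn risk for a new customer
print(model.predict_proba([[4, 3]])[0][1])  # probability of churn
```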
What is prompt engineering?
The art of writing smart inputs for AI to get better outputs. Like asking a genie the right question.
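For example, compare a vague prompt with an engineered one (illustrative strings only):

```python
# A vague prompt leaves the model guessing
vague = "Write something about our product."

# An engineered prompt adds role, audience, format, and constraints
engineered = (
    "You are a B2B copywriter. Write three LinkedIn post drafts "
    "(max 80 words each) announcing our CRM's new AI email assistant "
    "to marketing managers. Tone: confident, no buzzwords, end with a question."
)
```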
Which industries benefit most from AI marketing?
Pharma, Automotive, Tourism, SaaS, Finance – any industry with customer data and content needs. But especially regulated or competitive markets where smarter marketing makes the difference.
What are LLMs (Large Language Models)?
LLMs are the AI models behind popular AI assistants such as ChatGPT, Claude, Google’s Gemini, Meta’s Llama, Microsoft Copilot, or Mistral’s Le Chat. When you chat with an AI assistant, you interact with a large language model that processes your request directly or with the help of different tools, such as web browsing or code interpreters.

LLMs are deep neural networks made of billions of numerical parameters that learn the relationships between words and phrases and build a representation of language, a sort of multidimensional map of words. These models are built by encoding the patterns found in billions of books, articles, and transcripts. When you prompt an LLM, the model generates the most likely pattern that fits the prompt, then evaluates the most probable next word based on what was said before. Repeat, repeat, and repeat.
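A real LLM juggles billions of parameters, but the "predict the next word, then repeat" loop can be sketched in Python with a toy word-pair table (all the words and probabilities are invented):

```python
import random

# Toy "model": for each word, the possible next words and how likely they are.
# A real LLM learns billions of such relationships instead of six.
next_words = {
    "AI":        (["helps", "creates"], [0.7, 0.3]),
    "helps":     (["marketers"],        [1.0]),
    "creates":   (["content"],          [1.0]),
    "marketers": (["win"],              [1.0]),
}

word, sentence = "AI", ["AI"]
while word in next_words:
    options, probs = next_words[word]
    word = random.choices(options, weights=probs)[0]  # pick a likely next word
    sentence.append(word)

print(" ".join(sentence))  # e.g. "AI helps marketers win"
```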
What is AGI (Artificial General Intelligence)?
AGI is currently still a gray-zone term. It generally refers to AI that’s more capable than the average human at many, if not most, tasks. OpenAI defines it as a highly autonomous system that outperforms humans at most economically valuable work; in Google DeepMind’s understanding, it is an AI that’s at least as capable as humans at most cognitive tasks. Concept still under construction.
What is an AI Agent?
An AI agent refers to a tool that uses AI technologies to perform a series of tasks on your behalf — beyond what a more basic AI chatbot could do — such as filing expenses, booking tickets or a table at a restaurant, or even writing and maintaining code. However, there are lots of moving pieces in this emergent space, so “AI agent” might mean different things to different people. The infrastructure needed to deliver on its envisaged capabilities is also still being built out. But the basic concept implies an autonomous system that may draw on multiple AI systems to carry out multi-step tasks.
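Stripped to its essentials, an agent is a loop: the model picks a tool, the result feeds back in, repeat until done. Here's a toy Python sketch; the "model" is a scripted stand-in and the single tool is a stub, both invented for illustration:

```python
# Everything here is a toy: the "model" is a scripted decision function and the
# "tool" is a stub, just to show the loop structure of an agent.

def fake_model_decide(history: list[str]) -> tuple[str, str]:
    """Stand-in for an LLM choosing the next step. Returns (tool_name, argument)."""
    if not any("flights ->" in h for h in history):
        return ("search_flights", "MUC to LIS, Friday")
    return ("done", "Booked the 09:40 MUC-LIS flight.")

def search_flights(query: str) -> str:
    return f"Found 3 options for {query}; cheapest is the 09:40 departure."

TOOLS = {"search_flights": search_flights}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        tool, arg = fake_model_decide(history)   # in reality: ask the LLM
        if tool == "done":
            return arg
        result = TOOLS[tool](arg)                # run the chosen tool
        history.append(f"flights -> {result}")   # feed the result back in
    return "Stopped after max_steps."

print(run_agent("Book me a cheap flight to Lisbon on Friday."))
```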
What is Chain of Thought?
In an AI context, chain-of-thought reasoning for large language models means breaking down a problem into smaller, intermediate steps to improve the quality of the end result. It usually takes longer to get an answer, but the answer is more likely to be correct, especially in a logic or coding context. Reasoning models are developed from traditional LLMs and optimized for chain-of-thought reasoning through reinforcement learning.
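In practice, you can nudge a standard model towards this behavior in the prompt itself. An illustrative example (the budget numbers are invented):

```python
# Without chain of thought: the model jumps straight to an answer.
direct = ("What budget is left if Q3 spend was 42,000 EUR of a 120,000 EUR "
          "plan split evenly across four quarters?")

# With chain of thought: ask for intermediate steps before the final answer.
step_by_step = (
    "A 120,000 EUR annual budget is split evenly across four quarters. "
    "Q3 spend was 42,000 EUR. Think step by step: first compute the quarterly "
    "budget, then subtract the spend, and only then state what's left for Q3."
)
# Expected reasoning: 120,000 / 4 = 30,000 per quarter; 30,000 - 42,000 = -12,000,
# i.e. Q3 is 12,000 EUR over budget, a conclusion a direct answer often fumbles.
```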
What is Diffusion?
Diffusion is the tech at the heart of many art-, music-, and text-generating AI models. Inspired by physics, diffusion systems slowly “destroy” the structure of data — for example, photos, songs, and so on — by adding noise until there’s nothing left. In physics, diffusion is spontaneous and irreversible — sugar diffused in coffee can’t be restored to cube form. But diffusion systems in AI aim to learn a sort of “reverse diffusion” process to restore the destroyed data, gaining the ability to recover the data from noise.
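The "destroy by adding noise" half of the process is simple enough to sketch in Python with NumPy; this is a toy forward-diffusion loop, and real systems then train a network to learn the reverse:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.uniform(0, 1, size=(8, 8))  # stand-in for a photo: an 8x8 pixel grid

# Forward diffusion: repeatedly blend in Gaussian noise until the structure is gone.
x = image
for step in range(500):
    noise = rng.normal(0, 1, size=x.shape)
    x = np.sqrt(0.99) * x + np.sqrt(0.01) * noise  # keep 99% signal, add 1% noise

print(round(float(np.corrcoef(image.ravel(), x.ravel())[0, 1]), 3))
# After enough steps, correlation with the original is near 0: pure noise.
# Generative diffusion models learn to run this process in reverse.
```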
What is Deep Learning?
A subset of self-improving machine learning in which AI algorithms are designed with a multi-layered, artificial neural network (ANN) structure. This allows them to make more complex correlations compared to simpler machine learning-based systems, such as linear models or decision trees. The structure of deep learning algorithms draws inspiration from the interconnected pathways of neurons in the human brain. Deep learning AI models are able to identify important characteristics in data themselves, rather than requiring human engineers to define these features. The structure also supports algorithms that can learn from errors and, through a process of repetition and adjustment, improve their own outputs. However, deep learning systems require a lot of data points to yield good results (millions or more).
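A bare-bones sketch of the multi-layered idea in Python with NumPy: random weights, two layers, no training, just the forward pass:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=3)  # one input example with 3 features

# Two stacked layers: each multiplies by weights, adds a bias, then applies
# a nonlinearity (ReLU). Stacking layers is what makes the network "deep".
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # layer 1: 3 features -> 4 units
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)  # layer 2: 4 units -> 1 output

hidden = np.maximum(0, W1 @ x + b1)  # layer 1 output (ReLU keeps positives)
output = W2 @ hidden + b2            # layer 2 output: the network's prediction
print(output)
```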
What is Distillation?
Distillation is a technique used to transfer knowledge from a large AI model to a smaller one using a ‘teacher-student’ setup. Developers send requests to a teacher model and record the outputs. Answers are sometimes compared against a dataset to check their accuracy. These outputs are then used to train the student model to approximate the teacher’s behavior. Distillation can be used to create a smaller, more efficient model based on a larger one with minimal distillation loss. This is likely how OpenAI developed GPT-4 Turbo, a faster version of GPT-4.
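The teacher-student core of the idea, as a toy Python sketch where the student is scored on how closely it matches the teacher's output probabilities (all numbers invented):

```python
import numpy as np

# Invented example: the teacher's probability distribution over 3 answer options
teacher_probs = np.array([0.7, 0.2, 0.1])

def distillation_loss(student_probs: np.ndarray) -> float:
    """Cross-entropy between teacher and student outputs: lower = closer match."""
    return float(-np.sum(teacher_probs * np.log(student_probs)))

confident_student = np.array([0.68, 0.22, 0.10])
confused_student = np.array([0.34, 0.33, 0.33])

print(distillation_loss(confident_student))  # small loss: mimics the teacher well
print(distillation_loss(confused_student))   # larger loss: training would push
                                             # the student toward the teacher
```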
What is Inference?
Inference is the process of running a trained AI model: setting it loose to make predictions or draw conclusions from new data, based on the patterns it has previously seen. To be clear, inference can’t happen without training; a model must learn patterns in a set of data before it can effectively extrapolate from that training data.
Many types of hardware can perform inference, ranging from smartphone processors to beefy GPUs to custom-designed AI accelerators. But not all of them can run models equally well. Very large models would take ages to make predictions on, say, a laptop versus a cloud server with high-end AI chips.
What is Training?
Developing machine learning AIs involves a process known as training. In simple terms, this refers to feeding data into the model so that it can learn from patterns and generate useful outputs. It’s only through training that the AI model really takes shape. Essentially, it’s the process of the system responding to characteristics in the data, which enables it to adapt its outputs towards a desired goal.
It’s important to note that not all AI requires training. For example, rule-based chatbots that follow predefined scripts don’t need to undergo training. However, such AI systems are likely to be more constrained than (well-trained) self-learning systems. Still, training can be expensive because it requires lots of inputs — and, typically, the volumes of inputs required for such models have been trending upwards.
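A minimal Python training loop shows the repetition-and-adjustment idea on a one-weight model (the ad-spend data is invented):

```python
# Toy task: learn that revenue is roughly 3x ad spend (data is invented).
ad_spend = [1.0, 2.0, 3.0, 4.0]
revenue = [3.1, 5.9, 9.2, 11.8]

w = 0.0  # the model's single weight, starting from an arbitrary value
for epoch in range(100):
    for x, y in zip(ad_spend, revenue):
        prediction = w * x
        error = prediction - y
        w -= 0.01 * error * x  # adjust the weight a little against the error

print(round(w, 2))  # ends near 3.0: the model has "learned" the pattern
```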
What are Weights?
Weights are core to AI training, as they determine how much importance (or weight) is given to different features (or input variables) in the data used for training the system — thereby shaping the AI model’s output. Put another way, weights are numerical parameters that define what’s most salient in a dataset for the given training task. They achieve their function by applying multiplication to inputs. Model training typically begins with weights that are randomly assigned, but as the process unfolds, the weights adjust as the model seeks to arrive at an output that more closely matches the target.
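In code, that multiplication of inputs is just a weighted sum. A tiny illustration with invented numbers:

```python
# One customer, three input features (invented values)
features = {"emails_opened": 12, "site_visits": 5, "days_inactive": 30}

# Learned weights: how salient each feature is for predicting engagement.
# Positive = pushes the score up, negative = pushes it down.
weights = {"emails_opened": 0.4, "site_visits": 0.8, "days_inactive": -0.1}

score = sum(weights[name] * value for name, value in features.items())
print(score)  # 0.4*12 + 0.8*5 + (-0.1)*30 = 5.8
```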
Why do LLMs sometimes hallucinate?
Hallucination is the AI industry’s preferred term for AI models making stuff up – literally generating information that is incorrect. Obviously, it’s a huge problem for AI quality. Hallucinations produce GenAI outputs that can be misleading and could even lead to real-life risks — with potentially dangerous consequences (think of a health query that returns harmful medical advice). This is why most GenAI users should verify AI-generated answers.

The problem of AIs fabricating information is thought to arise as a consequence of gaps in training data. For general-purpose GenAI especially — also sometimes known as foundation models — this looks difficult to resolve, as there is simply not enough data in existence to train AI models to comprehensively resolve all the questions we could possibly ask. And this is OK.

But hallucinations are contributing to a push towards increasingly specialized and/or vertical AI models — i.e. domain-specific AIs that require narrower expertise – as a way to reduce the likelihood of knowledge gaps and shrink disinformation risks.
What is a Neural Network?
A neural network refers to the multi-layered algorithmic structure that underpins deep learning — and, more broadly, the whole boom in generative AI tools following the emergence of large language models. Although the idea of taking inspiration from the densely interconnected pathways of the human brain as a design structure for data processing algorithms dates all the way back to the 1940s, it was the much more recent rise of graphics processing units (GPUs) — via the video game industry — that really unlocked the power of this theory. These chips proved well suited to training algorithms with many more layers than was possible in earlier versions — enabling neural network-based AI systems to achieve far better performance across many domains, including voice recognition, autonomous navigation, and drug discovery.
What is Transfer Learning?
A technique where a previously trained AI model is used as the starting point for developing a new model for a different, but typically related task – allowing knowledge gained in previous training cycles to be reapplied.
Transfer learning can drive efficiency savings by shortcutting model development. It can also be useful when data for the task that the model is being developed for is somewhat limited. Models that rely on transfer learning to gain generalized capabilities will likely require training on additional data in order to perform well in their domain of focus.
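In Python with PyTorch, the classic recipe looks roughly like this: take a pretrained image model, freeze its learned layers, and retrain only a fresh final layer for your own task. A sketch, assuming torchvision is available:

```python
import torch.nn as nn
from torchvision import models

# Start from a model pretrained on ImageNet (the previous training cycle)
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained layers so their knowledge is reused, not overwritten
for param in model.parameters():
    param.requires_grad = False

# Swap in a fresh final layer for the new, related task (here: 2 classes,
# e.g. "on-brand" vs "off-brand" images); only this layer gets trained
model.fc = nn.Linear(model.fc.in_features, 2)
```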
Tools & Methods
Which AI tools do you recommend?
AI is a fast-moving technology. New tools and updates appear every week, so I’m constantly testing what fits each need. Currently, for writing: ChatGPT, Jasper. For design: Midjourney, Canva AI. For analysis: ChatGPT + Code Interpreter, Tableau. For automation: Make.com, Zapier.
What is your consulting process?
Listen first. Research. Identify the problem. Flag quick wins. Co-work with client teams to build a sustainable action plan. Activate network partners to deliver all necessary support to the action. Deliver and execute with distinction. Report, support and improve. Learn and move on.
Do you offer training or workshops?
Yes – in-house or remote. From 90-minute crash courses for individuals or small teams to full-day creative labs. Always hands-on, always fun.
Do you also work with startups?
Absolutely. I’ve helped founders set up AI-powered funnels, email campaigns, and pitch decks – often with very limited budgets.
What makes AI.tivist unique?
Because I like to work with departments or small teams within a company. In a gradual, step-by-step co-working approach, we deep-dive into processes and problems and work out how to improve performance and overcome obstacles with the right AI tools. We build together, we test, we evaluate, we improve. It’s also a blended process of self-analysis, alignment with the corporate vision, and the self-motivation to outperform.