In a pitch process, all agencies bring their best people and ideas – but whether the agency that best fits the briefing ultimately wins the contract often depends not only on creative or technical brilliance but also on how the selection process is conducted. Is it strictly structured, or is it left open to ad-hoc impressions?
A process is considered unstructured if it lacks clearly defined, transparent procedures and evaluation criteria, and if the collection and assessment of information are largely ad hoc and impression-based. This becomes evident when even one (!) of the following criteria is not met:
The last point in particular is quite common, and from a human perspective completely understandable: it is much easier to simply trust one's gut feeling. In reality, however, neither our intuition nor our perception is reliable. The result is poor decisions, whether due to biases (e.g., prejudices or halo effects) or a simple lack of objectivity.
Structured observation and evaluation in pitch processes is more than just rigid adherence to checklists. It is a systematic, transparent approach that ensures we make decisions as objectively and fairly as possible. In this regard, DIN 33430, which primarily deals with job-related suitability assessments, plays a key role. When applied to pitch processes, it provides us with a guiding framework for planning, execution, and evaluation – and thus serves as a powerful tool to minimize biases in decision-making.
In this context, structuring refers to the systematic planning and implementation of observation and evaluation processes, including:
By incorporating these elements, evaluations become more transparent and comprehensible for all stakeholders. This not only fosters acceptance of pitch outcomes but also increases validity—ensuring that the competencies and concepts critical to the decision are accurately assessed.
Unstructured or semi-structured processes often allow numerous subjective factors to influence decisions. These include personal sympathies, halo effects (“Excelling in one area implies excellence in others”), or confirmation biases (“I view Team A as competent, so I downplay their mistakes”).
DIN 33430 emphasizes objectivity and reliability, which in pitch processes translate to:
A structured approach thus helps minimize random distortions and makes results reproducible: two independent decision-making panels applying the same structured process are more likely to reach the same conclusion than panels judging by impression alone.
Start by clearly defining the pitch requirements. What are the critical performance characteristics? For an agency pitch, these might include innovation, market potential, and team competence, or perhaps creativity, adherence to budget and schedule, and past projects. This requirement profile forms the foundation for subsequent evaluations.
Set specific, measurable criteria derived from the requirement profile. For instance, creativity might be rated on a scale from 1 (very conventional) to 5 (groundbreaking). It’s crucial that all observers use the same definitions and scales.
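The idea of shared definitions and scales can be made concrete in code. The following is a minimal sketch, not prescribed by DIN 33430 or the article; the `Criterion` class, the weight, and the anchor texts are illustrative assumptions, but they show how a criterion, its weight, and behaviorally anchored scale points can be fixed once and handed identically to every observer.

```python
# Illustrative sketch: encoding an evaluation criterion with a shared,
# anchored rating scale so all observers rate against the same definitions.
# Names and values are hypothetical, not taken from the article.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Criterion:
    name: str
    weight: float                  # relative importance in the overall score
    anchors: dict = field(default_factory=dict)  # scale point -> definition


creativity = Criterion(
    name="Creativity",
    weight=0.4,
    anchors={
        1: "very conventional",
        3: "solid, with some original elements",
        5: "groundbreaking",
    },
)

# Every observer sees the identical anchor text for each scale point.
print(creativity.anchors[5])  # -> groundbreaking
```

Freezing the dataclass (`frozen=True`) mirrors the point in the text: once the pitch starts, the definitions and scales must not drift between observers.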
Create observation sheets or checklists for each criterion. Specify what to observe for each one. Include clear guidelines alongside open questions to minimize interpretation gaps.
Whether internal decision-makers or external consultants, brief all observers. Explain the importance of specific criteria and how to recognize them. Highlight common perceptual biases. The goal is a shared understanding of professional, objective observation.
Plan the pitch so that all agencies experience the same conditions. Fixed timeframes, standardized question sequences, and protocols help ensure fairness. Equally important: create a focused environment for observation.
After observation, each evaluator records impressions using the predefined framework. Use methods such as averaging, standard deviation, or weighting to derive an overall score from individual evaluations. Address inconsistencies and, in larger teams, consider a moderator to ensure adherence to the structured process.
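The aggregation step described above can be sketched in a few lines. This is a hypothetical example, not the article's prescribed method: the ratings, weights, and the disagreement threshold of one scale point are assumptions, but the mechanics (weighted averaging across criteria, standard deviation to surface inconsistencies for discussion) follow the text.

```python
# Illustrative sketch: deriving an overall score from individual observer
# ratings via weighted averaging, and flagging criteria where observers
# disagree strongly (high standard deviation) for moderated discussion.
# All data and the threshold are hypothetical.
from statistics import mean, stdev

# ratings[criterion] = scores (1-5) from each observer for one agency
ratings = {
    "creativity": [5, 4, 5],
    "budget_fit": [3, 2, 4],
}
weights = {"creativity": 0.6, "budget_fit": 0.4}  # must sum to 1.0

# Weighted average of the per-criterion means
overall = sum(weights[c] * mean(scores) for c, scores in ratings.items())
print(f"overall score: {overall:.2f}")

# Surface criteria where observers diverge by more than one scale point,
# so the panel (or a moderator) resolves them before the final decision.
for criterion, scores in ratings.items():
    if stdev(scores) > 1.0:
        print(f"discuss before deciding: observers disagree on '{criterion}'")
```

Keeping the aggregation explicit like this also supports the documentation step: the numbers behind the decision can be archived and reviewed later.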
Document the entire process in writing to enable future review of why a particular agency or team was selected. Provide feedback sessions to allow participants to learn and improve. This documentation can also serve as a reference for future pitches.
Although DIN 33430 was originally developed for professional aptitude diagnostics, its principles of structuring are increasingly applied across various contexts. The need for valid and reliable decision-making spans industries—from traditional marketing pitches to investor rounds and university selection processes.
In an era where transparency and fairness are paramount, systematic approaches seem more relevant than ever. Structured decision-making not only minimizes bias but ensures outcomes that are understandable, credible, and as impartial as possible.
As Malcolm Gladwell might say, adapted to this context:
"structured decisions are not a luxury—they’re the key to unlocking the full potential of our creative minds."
Structured observation and evaluation in pitch processes are more than just a formal framework: they are a methodological approach to achieving objective, fair, and reproducible results. By defining clear criteria, providing training for observers, using standardized evaluation tools, and implementing transparent assessment procedures, we lay the foundation for well-founded decisions. This ensures that the pitch is not a product of chance but a controlled process that highlights what truly matters: the best idea and the most capable team.
Comments, additions, and feedback are welcome in the comments section!