Independent advisory at the intersection of Industrial-Organizational Psychology, Human-System Governance, and emerging AI regulation.
Science-based. Practice-informed. Conflict-free.
The founder of Work Science Consulting (WSC) is an Industrial-Organizational Psychologist with more than a decade of experience at the intersection of behavioral science and enterprise AI. That background brings rare depth across AI governance, responsible technology design, and workforce science, built through years of senior advisory and program leadership in large, complex organizations. WSC is an independent practice, purpose-built to deliver that expertise to organizations that need it without compromise.
Organizations deploying AI in HR and workforce contexts face a growing set of governance, compliance, and change management challenges that most advisory firms are not fully equipped to address. WSC was built specifically for this — providing independent, science-based advisory without vendor affiliations or conflicts of interest.
Designing ethical guardrails and Responsible AI policies aligned with psychological principles and emerging global standards. From internal policy architecture to audit-ready documentation.
RAI · Policy · Standards
Helping organizations prepare for third-party audits of all kinds — from algorithmic bias reviews to regulatory compliance readiness — with a strategic lens toward market positioning, including review and drafting of customer-facing documentation grounded in science-based best practices.
Audit Prep · Compliance · Evidence Standards
Helping organizations drive effective AI adoption, change management, and workforce readiness — drawing on both behavioral science and hands-on experience scaling HR programs at the enterprise level.
Change Mgmt · L&D · Workforce Readiness
A four-phase approach WSC follows across all client engagements — from initial screening through final delivery.
All potential engagements begin with a rigorous Professional Independence Review. We evaluate strategic fit and ensure zero overlap with existing professional commitments.
We develop a grounded understanding of your program's current state, strategic objectives, and key gaps — through structured intake, documentation review, and stakeholder alignment.
We apply a blend of I-O Psychology methods and practitioner frameworks to evaluate what is working, what isn't, and what the evidence suggests should change.
We deliver clear, evidence-based recommendations and a practical roadmap your team can act on — along with any supporting materials needed to socialize and implement the work internally.
Free, publicly available diagnostic instruments designed to help organizations analyze how AI systems interact with workforce structures and job task demands. Formal advisory engagements follow a separate, proprietary process.
For organizations wondering which jobs are most affected when an AI system takes over a specific task or process — and whether that impact is additive or substitutive.
For organizations that want to know whether an AI system can actually handle the work before they commit — cutting through vendor claims with an evidence-based reality check.
For organizations planning an AI rollout and needing to understand how team structures, job responsibilities, and headcount will need to change as a result.
For organizations that need a practical training plan for the people who will be responsible for overseeing an AI system day-to-day.
For organizations that want to understand the ongoing human work that responsible AI deployment requires — what needs to be monitored, by whom, and how much effort it takes.
For organizations evaluating an AI product — systematically separating credible vendor claims from marketing language and building the right questions to ask before buying.
Methodological Foundation: These instruments are anchored by the O*NET occupational framework — the U.S. Department of Labor's standardized database of workforce knowledge, skills, abilities, and task structures — providing a scientifically grounded basis for analyzing how AI systems interact with human work. Analytical outputs are further informed by Industrial-Organizational Psychology principles and applied workforce science. Results are generated using fixed-seed parameters to ensure consistency and repeatability across identical inputs. These tools are provided free of charge as a public resource. They are distinct from WSC's proprietary advisory methodology, which applies custom frameworks and practitioner judgment beyond what these instruments provide.
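The fixed-seed property described above can be illustrated with a minimal sketch. The function name and scoring logic here are hypothetical, not WSC's actual instruments; the point is only the mechanism: when the random state is derived from a fixed seed plus the input, identical inputs always produce identical outputs.

```python
import random

def task_exposure_score(task_description: str, seed: int = 42) -> float:
    """Illustrative placeholder: score how exposed a task is to AI automation.

    Seeding the generator from a fixed base seed plus the input text makes
    the result repeatable: the same input always yields the same score,
    which is what makes an instrument's output auditable.
    """
    rng = random.Random(f"{seed}:{task_description}")
    return round(rng.uniform(0.0, 1.0), 3)

# Identical inputs produce identical outputs, run after run.
a = task_exposure_score("Schedule candidate interviews")
b = task_exposure_score("Schedule candidate interviews")
assert a == b
```

Repeatability across identical inputs is the property that lets two analysts, or the same analyst on two days, reproduce and verify a result.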
Expert interpretations of workforce dynamics at the intersection of I-O Psychology and AI governance. These are analytical takes — not legal summaries — framed as evidence-informed practitioner perspectives.
AI systems do not create a ceiling for performance; they establish a high-speed floor for routine task processing, requiring humans to pivot toward higher-order judgment and system governance.
Successful AI integration isn't measured by speed, but by auditability. If a system's logic cannot be traced back to workforce outcomes and task evidence, it is a black-box liability, not a strategic asset.
Not all AI is the same — and the distinction matters in HR. Deterministic systems produce consistent, auditable outputs. Non-deterministic systems, like generative AI, cannot. Organizations must know which type they have — because the governance requirements are fundamentally different.
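The governance distinction above can be made concrete with a toy sketch (both functions are hypothetical, not any vendor's API). A deterministic rule can be verified with exact-match tests; a sampled, generative-style output cannot, and needs statistical or sampled human review instead.

```python
import random

def deterministic_screen(years_experience: int, min_years: int = 3) -> bool:
    # Rule-based: the same input always produces the same, auditable output.
    return years_experience >= min_years

def nondeterministic_summary(resume_text: str) -> str:
    # Stand-in for a generative model: the output varies from run to run,
    # so exact-match regression tests cannot verify it.
    return random.choice(["strong fit", "possible fit", "needs review"])

# A deterministic system supports exact expectations:
assert deterministic_screen(5) is True
assert deterministic_screen(2) is False

# A non-deterministic one can only be checked against a space of
# acceptable outputs, which is a fundamentally different governance task.
assert nondeterministic_summary("...") in {"strong fit", "possible fit", "needs review"}
```

This is why an organization must first classify which type of system it is running: the test, audit, and documentation requirements differ at the root.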
As automated processing scales, the market value of human-in-the-loop oversight increases. The most critical skill is Discrepancy Detection — knowing when a system deviates from ethical or logical norms.
Moving away from AI hype requires a common, evidence-based language. Grounding AI capability claims in established workforce and measurement science creates a defensible baseline for describing how technology reshapes work.
Organizations assume explainable AI solves governance. It doesn't. You can have a fully transparent system with poor oversight infrastructure — or an opaque model that's rigorously governed. The real work isn't making the AI transparent. It's making the oversight transparent. Who monitors it? How often? What triggers escalation?
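The three oversight questions above (who monitors, how often, what triggers escalation) can be expressed as an explicit policy object. This is a minimal sketch with hypothetical field names and thresholds, not a WSC framework; the point is that making oversight transparent means writing these answers down as checkable structure rather than leaving them implicit.

```python
from dataclasses import dataclass

@dataclass
class OversightPolicy:
    owner: str                 # who monitors the system
    review_cadence_days: int   # how often they review it
    drift_threshold: float     # what measured drift triggers escalation

def needs_escalation(policy: OversightPolicy, observed_drift: float) -> bool:
    # Escalate when observed model drift exceeds the documented threshold.
    return observed_drift > policy.drift_threshold

policy = OversightPolicy(
    owner="HR Analytics Lead",
    review_cadence_days=30,
    drift_threshold=0.10,
)

assert needs_escalation(policy, 0.15) is True   # above threshold: escalate
assert needs_escalation(policy, 0.05) is False  # within tolerance
```

Note that nothing in this sketch depends on the model being explainable: the policy governs an opaque system just as well, which is the point of the passage above.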
Due to existing professional commitments and our strict independence protocol, WSC accepts a limited number of advisory engagements per quarter. All inquiries undergo a multi-stage alignment review before a Discovery Call is scheduled.
Complete the protocol form. Your inquiry enters our Independence Review Queue.
We conduct a professional conflict-of-interest screening against existing commitments. Window: 3–5 business days.
If cleared, you receive an invitation for a 30-minute peer-level Discovery Session with the WSC principal.
WSC operates under a strict Commitment to Objectivity. We do not participate in AI vendor referral programs or commission-based partnerships. All advisory work is conducted independently of any third-party affiliations. Nothing on this site constitutes legal, financial, or regulatory counsel.