
Problem-oriented policing: matching the science to the art

Abstract

This paper is an edited version of the Jerry Lee Lecture delivered at the Stockholm Criminology Symposium in 2018, the year in which Professor Herman Goldstein was awarded the Stockholm Prize in Criminology in recognition of his contribution to public safety through the development of problem-oriented policing. This paper examines the significance of a problem-oriented approach and seeks to establish the right balance among, and appropriate role for, a broad range of diverse contributions that scholars and analysts can make to support effective problem-solving. It explores the distinctive contributions of experimental criminology and program evaluation to problem-oriented work, and contrasts the inquiry techniques typically employed by social scientists and by natural scientists. The goal of this paper is to usefully “round out” the role that scholars are prepared to play in advancing effective problem-solving practice.

Background

It is an honor for me to be invited to deliver the Jerry Lee Lecture (see Note 1), and it is a privilege for me to be part of Herman Goldstein’s celebration. I have never attended a criminology conference before, so I will tell you a couple of reasons why I feel something of an outsider in this setting.

First, I’m not a habitual user of the preferred tools of social science or of criminology. In 30 years of academic life I have never once conducted a randomized controlled trial nor run a regression analysis, even though I have taught students at the Kennedy School when and how to use them. I’ve never worked on a Campbell Collaboration study. So, recognizing that I am in the company of expert and dedicated criminologists, I guess I should pause for a moment and allow those of you who want to leave to do so, because you have concluded that I am an academic slouch, analytically incompetent. But hold on just a minute. I am a pure mathematician by training, with a double first from Trinity College, Cambridge. I have a Ph.D. in applied mathematics, specifically pattern recognition, which I earned in 9 months. I invented the topological approach to fingerprint matching, which earned me not only a Ph.D. but also six patents. So, I don’t regard myself as an analytic slouch. But it is true that I am not at all immersed in the habits or norms or conventional analytic toolkits of any particular discipline, which I prefer to think of as a strength, not a weakness.

The second thing that makes me feel somewhat out of place at this symposium is that policing now takes up only about 5% of my research and teaching effort, not 100%. When I left the British police in 1988 to join the Kennedy School at Harvard, my research was closely focused on police strategy and development. But I soon discovered that the police profession does not move terribly fast, and that it takes any serious idea about 10 years to be worked through and incorporated into practice. The police profession at the time already had two very substantial ideas it was grappling with: problem-oriented policing and community policing. They certainly hadn’t finished with those in 1988. Thirty years later, I regret to say, I believe they still haven’t.

With my policing background I was accustomed to being busy, and my phone at Harvard wasn’t ringing, so I took on some other projects. I worked with the Environmental Protection Agency in Washington on a project they called “intelligent use of environmental data”. I did a project with the Internal Revenue Service on an initiative they called “compliance management”. The fact that “compliance management” would appear as an initiative in a tax setting might surprise you. What they actually meant by that phrase at that time was adopting a problem-solving approach—even though they didn’t use that language—identifying patterns of tax non-compliance and developing the organizational capability to address each one in a tailored and intelligent way.

Working with these two other professions, I realized before long that I was seeing precisely the same set of aspirations and organizational dilemmas that the police were wrestling with under their rubric of problem-oriented and community policing. So, in 1994, I wrote a book—Imposing Duties: Government’s Changing Approach to Compliance—describing the parallels between just these three professions: environmental protection, tax, and policing (Sparrow 1994). After working with a broader array of regulatory and enforcement agencies, I wrote a follow-up book in 2000, The Regulatory Craft: Controlling Risks, Solving Problems & Managing Compliance (Sparrow 2000). It had become obvious by then that there was a set of truths applicable across a broad range of social regulatory fields. These truths had been articulated for the police profession very clearly in Herman Goldstein’s work, but had not been articulated anything like so clearly in almost any of these other professions.

I feel, looking back, that much of my 30 years’ work in academia has been to take Herman Goldstein’s very clear articulation of the problem-solving approach way beyond the confines of the police profession. So I’m going to talk today from the perspective of a very broad range of application.

How broad might the application of these ideas be? The very first paragraph of my book The Character of Harms quotes the Millennium Declaration of the United Nations (Sparrow 2008). Imagine this: the human race gets together and figures out what the unfinished work for the next century is! What they did is produce a list of harms not sufficiently controlled around the world: hunger, war, genocide, weapons of mass destruction, international terrorism, the “world drug problem”, trans-national crime, smuggling of human beings, money laundering, illicit traffic in small arms and light weapons, anti-personnel mines, extreme poverty, child mortality, HIV/AIDS, malaria, other emerging infectious diseases, natural and man-made disasters, violence and discrimination against women, involvement of children in armed conflict, the sale of children, child prostitution, child pornography, and loss of the world’s environmental resources.

An overwhelming list: what I see when I read this list is problems to be solved. In fact, each of these is not a problem, but a huge class of problems. The authors of the Millennium Declaration chose to express this challenge as ‘bads’ to be controlled. All of this work is of the same basic type: the identification and control of risks or harms. Many other major policy challenges can be naturally labeled and described in similar terms. Societies seek in turn to reduce violence and crime, pollution, fraud, occupational hazards, transportation hazards, corruption, many forms of discrimination, product-safety risks, and so on.

Do we have a language for this type of work? Is there an established academic discipline that covers it? Do we understand the core set of professional skills that all of these endeavors would draw upon? I’m not sure that we do.

Untying knots

The front cover of The Character of Harms has a picture of a knot on it (see Fig. 1). I’m particularly proud of this photograph, because I took it myself. What’s special is what happens when you’re confronted with this image. I know what your brain is doing, because you can’t help yourself. You are naturally drawn into it and you start by conducting stage one epidemiological analysis of this object, assuming it’s a bad thing to be undone. That means figuring out the structure: the way it is. Or the way it works. Its components, structure and dynamics. Many people move on immediately to stage two epidemiological analysis, which involves figuring out the weaknesses of the thing itself. If I asked you to untie it, you’d need to know its vulnerabilities. Which strand would give way first, or most easily? What’s your plan for unravelling it? You might carry out a little experimentation along the way, but if you had really understood the structure and figured out the vulnerabilities of the risk enterprise itself (that’s the analogy) then your analysis of the structure of the thing leads you to invent a tailor-made solution for undoing it. This is a simple physical analogy for problem solving. I wanted an image that would trigger all those mental processes. This knot is about the right level of complexity to grab your attention.

Fig. 1. Undoing knots.

Now, when we move from the field of individual cognition to organizational behavior, things get more complicated (see Fig. 2). This is the chart that I’ve been using since 1998 just to illustrate the important distinction between problem-centered approaches and program-centered approaches. There are some big and general classes of things out there in the world (bottom-right quadrant) that we should worry about. It might be international trafficking in nuclear materials, or in women and children, or in drugs. Or it could be violent crime or political corruption or environmental pollution.

Fig. 2. Two modes of organizational behavior.

Once society becomes sufficiently perturbed by the class of risk, we invent a new government agency, or control operation, which needs a strategy. Before long a central idea emerges: a “general theory” of operations. This is how, in general, we will deal with this broad class of problems, and we build big machines. The machines (which sit in the top right quadrant) are programmatic. They tend to be either functional or process based. Those are two quite different ideas in organizational theory. We know about the value and efficiencies of functional specialization; and the use of specialist enclaves as incubators for specialist knowledge and skills. Those lessons date back to the industrial revolution. Then, in the last 35 years, we learned the importance of managing processes. These are high-volume, repetitive, transactional; and frequently cut across multiple functions. The public sector learned process management from the private sector, with a lag of roughly 5 years. But by now we’ve mastered it. Process management and process engineering methods are commonly applied when we set up or want to improve systems for emergency response, tax returns processing, handling consumer complaints. Because such public tasks are important, high-volume and repetitive, it’s worth engineering and automating them, using triage and protocols to ensure accuracy, timeliness, and efficiency.

In the top-right quadrant we therefore have major programs—organized either around functions or around processes. But large bureaucracies then have to divide up the work and hand it out, often across multiple regions. So, front-line delivery by major regulatory bureaucracies is ultimately performed by functional units and process operations disaggregated to the regional level (top-left quadrant). That’s the quadrant where the rubber hits the road. That’s all good, and that’s the way major agencies have been organized for decades.

This audience knows the various general theories that have come and gone in policing over time. The “professional era” of policing relied on rapid response to calls for service, coupled with detectives investigating reported crime. That was the “general theory” of police operations for a good long time, until of course it eventually began to break down.

Environmental protection has had its own “general theories” too. If you assume most pollution comes from industrial plants, then the general theory of operations is to issue permits to industrial facilities (to allow them to operate) and attach conditions to their permits governing various discharges—smokestacks, water-pipe discharges, transportation of hazardous waste. Environmental agencies then monitor compliance with the conditions on the permits. That general theory works fine for many pollution problems, but then environmental issues appear that have nothing to do with local industrial facilities: radon in homes, sick office buildings, airborne deposition of mercury originating from Mexican power plants, importation of exotic species. These problems don’t fit that general model.

At that point regulatory organizations realize that their big programmatic engines cover many things but not all things. Eventually an alternate operational method emerges, which depends neither on general theories nor on major programs. Examining the general class of risks, an agency realizes “this isn’t one problem, but at least 57 varieties”. Then begins the work of disaggregating risks, focusing on specific problems (bottom-left quadrant), studying their particular structures and dynamics, leading to the possibility of tackling them one by one. Spotting knots, if you like, and then unpicking them. When an agency operates that way, it invariably ends up inventing tailored interventions for carefully identified harms.

I drew Fig. 2 in 1998 for inclusion in my book The Regulatory Craft. It was clear at the time that many celebrated innovations in public service involved organizations designing and implementing tailor-made interventions for carefully identified problems. Other emerging innovations involved the disaggregation task itself—where the big arrow sits in the middle of the bottom row of the chart. The type of work happening here involves analytic systems, data-mining systems, anomaly-detection systems, sometimes intelligence systems, learning from abroad, imagining problems that you’ve never seen, and the ability to spot emerging problems quickly. I call these, as a general class, “vigilance mechanisms”—the methods you use to discover problems or potential problems that you might not have known about if you hadn’t deliberately looked for them.

The point I want to make with this chart is simply that these two methods, program-centric and problem-centric, are quite different. Program-centric work is extremely well established and very formally managed. Problem-centric work in most professions (including the police profession) is relatively new, in many cases quite immature, and often not formally managed at all.

The nature of the work, and the working methods, differ between the top-right quadrant and the bottom left. Figure 3 illustrates the types of task statements that might appear in the program-centric space. I’ve picked a miscellaneous collection of tasks from different fields. These task definitions are very specific about the preferred programmatic approach: negotiated rulemaking, three-strikes policies, drug-awareness resistance programs, etc. But they are somewhat vague about the specific problem being addressed, or the range of problems for which that program might be relevant. That’s characteristic of the way work is organized in the top right-hand corner.

Fig. 3. Program-centric task statements.

Figure 4 illustrates task statements that would fit in the problem-centric space (bottom left). In selecting these I have disciplined myself to use roughly the same number of words and the same domains. These statements are much more precise about the specific problem to be addressed. They each say nothing (yet) about a preferred solution, because stage one epidemiological analysis seeks first to accurately describe the problem and figure out how it works, before considering plausible solutions. Each is a draft summary problem statement, taken from a different field.

Fig. 4. Problem-centric task statements.

The place for program evaluation techniques

Question: where do program evaluation techniques belong more naturally? With the problem-centric work, or the program-centric work? The basic, if crude, distinction between program-centric work and problem-centric work has obvious consequences for the type of analytic support required. If you operate a specific program—especially if it is a big, permanent, and expensive one—then it would be professionally irresponsible not to have it evaluated. Program-evaluation techniques fit very naturally in the program-centric space.

But the discussion I have raised in some of my papers, and my recent criticisms of the evidence-based policing movement, revolve around this question: how well do the techniques of program evaluation fit in the problem-centric space? For sure, they are not entirely irrelevant. But they are not so obviously relevant as they are in the program-centric space. And I notice an awful lot of other analytic work going on, and a range of different scientific methods involved, once you choose to operate in the problem-centric space.

One of the odd things that I have noticed through working across so many different regulatory disciplines is that some agencies are inherently more scientific than others. I’m talking natural sciences, not social sciences. Environmental protection agencies, public health agencies, safety regulators in civil aviation or in the nuclear power industry: these agencies are thickly populated with advanced degrees in engineering, physics, chemistry, biology or some other science. In these areas they do not talk about “evidence-based policy”. The subject just doesn’t come up! They scarcely ever conduct randomized controlled trials. For those of you who plan to fly home from this symposium, I’m pleased to report that aviation regulators do not use randomized controlled trials in their efforts to ensure and enhance airline safety. For sure they do a lot of experimentation in laboratories, to test components; in wind tunnels, to test designs; in simulators, to test the training of pilots and crews.

These more scientifically oriented professions use natural-science investigation techniques rather than social-science ones because they are intensely focused on the connecting mechanism (Nagel 1961). They study how the thing works, how a risk unfolds. They focus more on “how does it work?” (problem-definition) than on “what works?” (program-evaluation) (Lindblom 1990). If your watch is broken, you open it up and find a broken spring; you replace it, close the watch, and it works again. Then you know you fixed it, because you had a very clear view inside the mechanism and could see precisely how your intervention took its effect. You don’t need fifty watches or a control sample to know you fixed it.

Regulatory agencies vary in the degree to which they get inside mechanisms, and therefore rely on the natural sciences and engineering, as opposed to observing effects over time from a distance and then using sophisticated statistical techniques to determine whether this caused that. The sciences are just different. It’s a fascinating thing to observe.

I do believe there is a considerable awkwardness if one tries to push program-evaluation techniques into the problem-centric space. Here are eight ways in which the use of program evaluation techniques might turn out to be quite awkward in the problem-solving space. (These arguments are more fully explored in Handcuffed, Sparrow 2016, Chapter 4).

  1. Establishing “what works” is too slow for operational work. To build the body of evidence to support the claim that a specific program “works” might take at least 3–5 years. A lot of problem-solving work is conducted on a shorter timeframe than that. Nimbleness and speed matter, and so does the ability to move on quickly to the next problem.

  2. A “what works” focus may narrow the range of solutions available. If scholars attempt to discipline police by saying they must use only what works (Sherman 1998), police might infer that they cannot try anything new, which would be tragic in this context.

  3. Social science focuses on subtle effects at high levels; problem-solving focuses on more obvious effects at lower levels. Asking whether a broadly implemented program affects overall reported crime rates in a geographic area is quite different from asking “did we solve the problem of high-school kids committing burglaries on the way home in the late afternoon?”

  4. Demand for high-quality experimentation may (ironically) reduce practitioners’ willingness to experiment. Scholars’ insistence on high-quality experimental protocols might make practitioners reluctant to run cruder or cheaper experiments, or to engage in the kind of iterative and exploratory development that characterizes problem-solving (Moore 1995; Eck 2002; Moore 2006).

  5. Focusing on program evaluation may perpetuate the “program-centric” mindset, at a time when one is seeking to develop and enhance “problem-centric” capabilities. Focusing first on problems, rather than on programs, is a fundamentally different approach, and leads to entirely different organizational behaviors and operational methods.

  6. Focusing on statistically significant crime reductions may not recognize or reward the best problem-solving performance. The best performance, in a risk-control setting, means spotting emerging problems early and suppressing them before they do much harm. The earlier the spotting, the less significant (in a statistical sense) the resulting reductions. The very best risk-control performance would therefore fail to produce substantial reductions, and might not be visible under the lenses of standard statistical inference.

  7. Program evaluations focus on establishing causality; problem-solving focuses on reducing harms and then moving on quickly.

  8. Evaluations focus on particular interventions, rather than on the broader problem-solving capabilities of an organization. Even unsuccessful attempts to solve specific problems can provide valuable lessons and experience, enrich partnerships with community groups and other agencies of government, and build officers’ confidence and capabilities. For problem-solving to mature, attention must be paid to all these other forms of investment and progress.

This constitutes what I believe is a continuing and healthy debate. (Pawson and Tilley 1997; Black 2001; Paquet 2009) But notice that it is, curiously, a supply-side debate; almost “discipline-centric” in fact. The question mostly debated is “how well do the social-science methods of program-evaluation fit in this (problem-solving) space?” We should ask the demand-side question instead: “what kinds of analytic support might be required to properly support problem-solving?”

Defining analytic support for problem-solving

In my executive programs for senior regulators, I ask the class to imagine that all their functional programs are perfect—their detective units, their auditors, their inspection divisions—all expert, state of the art, competent, professional, and efficient. And to imagine too that all their major processes are sweetly oiled engines, beautifully designed and handling transaction loads in a timely, accurate, and cost-efficient way. In other words, everything in the top right—major programs—is working perfectly. Then I ask them to identify the types of risk that nevertheless might not be well controlled. I let them ponder that. It doesn’t take them long to come up with a list. Here are the categories they most commonly nominate:

  • Catastrophic risks: things that don’t normally happen (or maybe have never happened yet), and which therefore are not represented in the normal workload.

  • Emerging risks: risks that were not known when the major programs were designed. These often involve technological innovation within regulated industries, and are therefore not covered by established programs.

  • Invisible risks: risks where discovery rates are significantly below 100%. These are very well understood in the criminological literature: consensual crimes, white-collar crimes, crimes within the family. These are issues with sufficiently low discovery or reporting rates that we don’t know the true scope, scale, or concentrations of the problem. The problem itself might be totally invisible. In less severe instances, available data might show only partial or biased views of the problem.

  • Risks involving conscious adversaries or adaptive opponents: opponents who deliberately circumvent controls and respond intelligently to defeat control interventions. Some of these opponents are technically sophisticated and clever: terrorists, thieves, hackers, cyber-criminals.

  • Boundary-spanning risks: if responsibility for controlling a risk (for example, juvenile delinquency) sits across several major public agencies, then it is inconceivable that a specific program owned by just one of those agencies could constitute an adequate solution. Such problems demand a collaborative approach to planning and action.

  • Persistent risks: we keep on seeing cases of a particular type, and maybe we handle each case perfectly, but the volume of cases stays high, so we are obviously not controlling the underlying problem. Then it is time to think differently: to move beyond the high-volume reactive process (which sits in the top-right quadrant), develop some thinking in the bottom left, get to the underlying causes and factors, and address the pattern in a more systematic fashion.

I’m sure there are many other categories of risks one could add to this list, but these are the ones most commonly put forward as compelling reasons for acknowledging the need to develop a problem-oriented capability.

Each of these categories presents major demands for analytic and scholarly support—and not usually program evaluation, at least not upfront. Each category is peculiar and different in the kinds of analytic support required.

For catastrophic risk, there’s the work of imagining things that haven’t happened before, but could. There’s the work of collating experience from abroad, bringing home and learning from everyone else’s misery anywhere on earth, testing our readiness, re-assessing the probabilities, figuring out whether we should worry at all, and if so how much and in what way. Also, we should deliberately exploit near misses, which present major opportunities for cross-agency learning exercises and reviews of contingency plans. Such work needs to be recognized, set up, and organized.

Effective control of emerging risks puts a premium on anomaly detection. Spotting them early requires a range of pattern recognition methods to detect departures from normal loads and distributions, and then methods of inquiry to find out what the anomaly represents. We should also deliberately hunt for emerging risks based on the experience of other jurisdictions. That means first gathering up that experience from far afield, figuring out the algorithms that would detect those phenomena if present, then applying them and investigating what they reveal. This is a heavily analytic endeavor.
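
By way of illustration, here is a minimal sketch of the kind of anomaly-detection method this implies (my hypothetical example; the categories, counts, and threshold are all made up): a robust z-score test that flags incident categories whose current weekly count departs sharply from their historical baseline.

```python
import statistics

def flag_anomalies(history, current, threshold=3.5):
    """Flag categories whose current weekly count departs sharply from
    the historical distribution, using a robust z-score (median/MAD)."""
    anomalies = []
    for category, counts in history.items():
        med = statistics.median(counts)
        mad = statistics.median(abs(c - med) for c in counts) or 1.0
        # 0.6745 scales the MAD so the score is comparable to a
        # standard z-score when the data are roughly normal.
        z = 0.6745 * (current[category] - med) / mad
        if abs(z) > threshold:
            anomalies.append((category, current[category], round(z, 1)))
    return anomalies

# Hypothetical weekly counts for two incident categories over 12 weeks.
history = {
    "copper-wire theft": [4, 6, 5, 7, 5, 6, 4, 5, 6, 5, 7, 6],
    "late-afternoon burglary": [20, 22, 19, 21, 23, 20, 22, 21, 20, 22, 21, 23],
}
current = {"copper-wire theft": 15, "late-afternoon burglary": 22}

# Only the copper-wire theft spike is flagged; burglary stays in range.
print(flag_anomalies(history, current))
```

The interesting work then begins: finding out what the flagged anomaly actually represents.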

Agencies dealing with invisible risks are plagued by what I call the macro level circularity trap of underinvestment. Nobody really knows how extensive the problem is, so the controlling agency cannot build the case for much budget. Because they can’t spend much, they can’t spend much on discovery, so they don’t discover much. This reinforces the notion that the problem might not be so bad. That’s the circularity trap. I’ve already gone around the whole loop once. There’s a scholarly job to be done in breaking open that circularity trap by measuring the scale of the issue and putting some reliable facts and figures, or at least valid estimates, on the table.

With invisible risks, all the readily available metrics—the number of cases detected or reported—are ambiguous. Such metrics are the product of the prevalence multiplied by the discovery rate, and both are unknown. Therefore, whenever that product moves up or down one never knows which component changed, and whether that’s good news or bad news. It’s a scholarly job to decouple the prevalence from the discovery rates, in most settings by designing and carrying out systematic measurement programs. If, in certain circumstances, it’s not practical or possible to measure prevalence through random sampling, then one can measure the discovery rate instead, sometimes by testing discovery mechanisms deliberately, and sometimes through artful and creative cross matching using independent data sources.
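
The ambiguity is easy to demonstrate with a toy calculation (hypothetical numbers throughout): two very different underlying situations generate identical case counts, and only an independent measurement, such as a random audit sample, can separate prevalence from the discovery rate.

```python
# Observed cases = prevalence x discovery rate, so the observable product
# is identical in these two hypothetical scenarios:
scenario_a = {"prevalence": 10_000, "discovery_rate": 0.02}  # big problem, weak detection
scenario_b = {"prevalence": 500, "discovery_rate": 0.40}     # small problem, strong detection

for name, s in [("A", scenario_a), ("B", scenario_b)]:
    reported = s["prevalence"] * s["discovery_rate"]
    print(f"Scenario {name}: {reported:.0f} cases reported")  # 200 in both cases

# Decoupling the two factors: audit a random sample of the underlying
# population and measure prevalence directly.
population = 1_000_000   # e.g., tax returns filed
sample_size = 2_000      # randomly selected for audit
violations_found = 38    # hypothetical audit result

prevalence_estimate = violations_found / sample_size * population
discovery_rate = 200 / prevalence_estimate  # using the 200 reported cases

print(f"Estimated prevalence: {prevalence_estimate:.0f} violations")
print(f"Implied discovery rate: {discovery_rate:.2%}")
```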

Controlling risks that involve conscious opponents, or adaptive opposition, demands the use of sophisticated intelligence methods. To find out what opponents are thinking, agencies will use surveillance techniques, develop informants, and make trades with convicted felons for information about what the industry is planning. And if we can’t figure out what they are actually thinking, then we should work out what they should be thinking. Pay honest people good money for dishonest thinking. What would we be thinking if we were in their shoes? For this work to get done, somebody needs to convene the meeting, devote the time, and invest in the business of imagining what should be in the opponents’ heads. It’s a very specific type of work that pertains only to this class of risks.

What about boundary spanning risks? Many of the presentations at the Stockholm Symposium have described issues that were not wholly owned by the police. The scholars involved in these problem-solving projects played an impressive and vital role in holding together multi-agency coalitions and binding them to an analytically rigorous process. Such groups often depend on somebody with academic standing and credibility who can establish procedural discipline and appropriate levels of analytic rigor.

The analysis of persistent risks typically involves cluster analysis of available (incident based) data. In the project presentations at this symposium we’ve seen many fascinating cases of such analytic work being done to establish the natural dimensionality, scale, and features of various crime patterns.
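
As a minimal sketch of what such cluster analysis might look like (hypothetical data and parameters, not drawn from any of the symposium projects), density-based clustering can carve a pile of incident records into candidate problems by location and time of day, setting the leftovers aside as noise.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Hypothetical incident records: (x_km, y_km, hour_of_day).
rng = np.random.default_rng(7)
pattern1 = rng.normal(loc=[2.0, 3.0, 16.0], scale=[0.2, 0.2, 1.0], size=(60, 3))
pattern2 = rng.normal(loc=[7.5, 1.0, 2.0], scale=[0.3, 0.3, 1.5], size=(40, 3))
background = rng.uniform(low=[0, 0, 0], high=[10, 10, 24], size=(30, 3))
incidents = np.vstack([pattern1, pattern2, background])

# Scale features so a kilometer and an hour carry comparable weight, then
# let DBSCAN find dense concentrations and label everything else -1 (noise).
X = StandardScaler().fit_transform(incidents)
labels = DBSCAN(eps=0.4, min_samples=10).fit_predict(X)

for label in sorted(set(labels)):
    members = incidents[labels == label]
    tag = "noise" if label == -1 else f"candidate problem {label}"
    print(f"{tag}: {len(members)} incidents, mean hour {members[:, 2].mean():.1f}")
```

Each dense cluster then becomes a candidate problem definition, to be checked against field knowledge before anyone commits project resources to it.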

You can see, I trust, that I’m not opposed to analysis. In fact, I’m always stressing the importance of analysis in the problem-solving arena. My rule of thumb is that for any problem-oriented project an agency launches, probably 20% of the effort required will be analytical. Of course, it should scale with the size and complexity of the problem, but roughly 20% seems a useful guide. Sadly, that usually exceeds the analytic capacity agencies have available.

Problem-solving protocols and managerial infrastructure

To be sustainable, problem-solving needs to be formally managed. The necessary managerial machinery has two components, and clarifying those helps us figure out the necessary academic support. One is the protocol through which any problem-solving project progresses, and the other is the background managerial infrastructure that runs the whole system.

My impression regarding implementation within the police profession is that we’ve talked a lot about the protocols for projects [including the SARA model (Eck and Spelman 1987)], but we’ve talked much less about the background managerial infrastructure.

Figure 5 shows my standard problem-solving protocol, or project template. It shows, I believe, the absolutely minimal level of granularity. You’re all familiar with the SARA model. I’ve had to adjust this a little bit based on experience with other regulatory agencies. One reason this differs from SARA is that I’ve deliberately separated stage one—problem nomination—from everything that follows. It’s not wise to assume that the person who nominates the problem should necessarily lead or participate in a project. It will be your best and brightest staff offering things up, and if they discover that every time they do that it makes more work for them, they will stop doing it! The assumption that nominators become champions works for a very short period. For the method to be sustainable, staffing decisions must go back up through the HR system and a formal process for establishing project teams, selecting the right people, and balancing their workloads at the same time.

Fig. 5. Problem-solving protocol.

I also emphasize the need to be able to close projects—stage 6—because often projects are launched with enthusiasm in the public sector, but then eventually they just peter out, or they stall because the team is dysfunctional, or we just forget about them when other priorities come up. Projects should be formally closed: either because they’re hopeless and you had better spend your scarce resources on something more promising; or because they’ve succeeded enough so that this risk, at this residual level, is now no longer the priority for special attention; or because a crisis emerges and the project must be shelved for a while, because it’s all hands on deck to deal with the crisis, after which you need an orderly process for resumption.

I also have learned to stress the long-term monitoring and maintenance that goes with project closure. We’ve seen several projects described this week where initial success brought an encouraging period with low incident levels, but then later the volume creeps up and we’re not sure why. In some cases, the project managers did not have the data or analysis to explain the subsequent rise. An essential twist in the tail of the problem-solving protocol is the requirement to put in place monitoring and alarms that will alert us quickly if the problem, or some variant of it, should re-emerge.
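
One simple form such an alarm could take (a sketch, assuming weekly incident counts remain available for the closed problem; the baseline and parameters here are hypothetical) is a one-sided CUSUM monitor, which accumulates small upward drifts that week-by-week threshold checks would miss.

```python
def cusum_alarm(counts, baseline, slack=1.0, threshold=5.0):
    """One-sided CUSUM on post-closure incident counts.

    Accumulates excesses over (baseline + slack) and raises an alarm when
    the cumulative excess crosses the threshold, catching the slow creep
    that single-week comparisons would miss."""
    s = 0.0
    for week, count in enumerate(counts, start=1):
        s = max(0.0, s + (count - baseline - slack))
        if s >= threshold:
            return week  # first week the alarm fires
    return None

# Hypothetical weekly counts after a project closes: success at first,
# then the problem quietly creeps back.
post_closure = [2, 1, 3, 2, 2, 4, 4, 5, 5, 6]
week = cusum_alarm(post_closure, baseline=2.0)
print(f"Alarm at week {week}" if week else "No re-emergence detected")
```

With these numbers the alarm fires at week 9, even though no single week on its own looks dramatically abnormal.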

What is the background managerial infrastructure? I think at the minimum it includes the seven components in Fig. 6. You need a nomination system for problems—a way of canvassing for them and funneling them to a central point so that they can be assessed and compared. You need a selection system that deploys a set of agreed criteria that should be considered by whoever is responsible for prioritization and selection. Most organizations also identify a few criteria that should not be considered, so as to avoid improper political interference or corrupt influence from industry. Then, once problems have been selected, the organization needs a system for assigning people, time, money, and analytic support. Project work should be recorded, and results channeled into the performance reports of the agency. And because staff and managers are being asked to do work of a type that’s unfamiliar to them and makes many of them feel quite insecure, they need support and advice from others more experienced in the problem-solving art.

Fig. 6. Managerial infrastructure.

Many police agencies don’t have these formal mechanisms. In that case, problem-solving will be limited to a small number of ad-hoc projects driven by self-motivated champions, and lacking any formal support from the organization. In some departments problem-solving hasn’t matured beyond the beat-level version, where projects are conceived, conducted, and concluded by beat officers acting alone, if they are so minded; in which case no problem bigger than beat-sized can be effectively addressed. Substantial problems need substantial problem-solving investments, and a system for managing them.

Back to the question of scholarly support. Which of these components of the managerial infrastructure in Fig. 6 present special opportunities for scholarly intervention? I would say numbers one, two and seven. Finding problems in the first place involves anomaly detection, scanning, gathering intelligence from other jurisdictions and conducting searches to see if a specific problem appears here. Comparative assessment is intensely analytical. Imagine you have fifteen different problem nominations, with an imperfect understanding of their nature and incomplete information about their scale, scope, or concentration. Much analytic work is required to enable managers to make sensible, informed decisions about which ones are more important. And it is often academics or consultants who can play the special role of coach or advisor, working with team leaders, sitting alongside managers as they conduct periodic project reviews, giving practitioners confidence to make the relevant decisions knowing they can change course later if necessary.
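
A minimal sketch of what that comparative-assessment support might look like (the criteria, weights, and nominations below are all hypothetical; a real selection system would use the agency’s own agreed criteria): a weighted score over nominations, with an explicit flag for those whose scale is still unknown and which therefore need measurement before selection.

```python
# Hypothetical nominations, each scored 0-10 on agreed criteria, with a
# flag recording whether the problem's true scale is actually known.
nominations = [
    {"name": "metal theft from rail lines",
     "harm": 7, "trend": 8, "tractability": 6, "scale_known": True},
    {"name": "courier fraud against the elderly",
     "harm": 8, "trend": 6, "tractability": 4, "scale_known": False},
    {"name": "weekend town-centre assaults",
     "harm": 6, "trend": 4, "tractability": 7, "scale_known": True},
]

WEIGHTS = {"harm": 0.5, "trend": 0.3, "tractability": 0.2}

def score(nomination):
    """Weighted multi-criteria score for ranking problem nominations."""
    return sum(WEIGHTS[c] * nomination[c] for c in WEIGHTS)

for nom in sorted(nominations, key=score, reverse=True):
    caveat = "" if nom["scale_known"] else "  [scale uncertain: measure first]"
    print(f"{score(nom):.1f}  {nom['name']}{caveat}")
```

The point of such a tool is not to automate the choice, but to make the managers’ trade-offs explicit and contestable.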

If those seven components are all necessities, there are two additional ones I’d view as luxuries:

  1. A reward system, to provide recognition for project teams that achieve important results, and

  2. A system for learning, to provide broader access (within the organization and across the profession) to knowledge acquired: what worked, what didn’t, what resources are available within and outside the agency, contact information, keyword-searchable databases of projects, etc.

Here, I believe, the police profession does far better than many others. Several award systems have been implemented around the globe, including the Tilley Award in the U.K. and the annual International Herman Goldstein Problem-Solving Award Program. The police profession has also worked diligently through conferences and research programs—notably including the Center for Problem-Oriented Policing—to build a knowledge base for the profession, available to all.

Broadening the range of scholarly support

To provide more rounded support for problem-oriented policing, in what direction should we push scholarship, and the nature of scholar/practitioner relationships? I think we need to:

  1. Broaden the range of crime-analysis and pattern-recognition techniques, recognizing the multi-dimensional nature of crime problems (beyond place/time/etc.) and focusing on problems that are emerging, novel, or unfamiliar.

  2. Develop the interplay between data mining and investigative field craft, to support the investigation of complex phenomena. One might develop from the data an idea about what might be happening, then go and test the hypothesis on the street with undercover shopping, intelligence gathering, or some surveillance; then come back and filter the data in a different way, adjust the hypothesis, and go back and forth between these two modes of inquiry through multiple iterations. Investigation of complex problems seems to require that collaborative style of inquiry.

  3. Define and refine the supporting role for analysis at each stage of the problem-solving process.

  4. Design and deliver quality analytic support at every level of the police organization, because problems are small, medium, or large, and a mature problem-solving organization should be able to organize projects at any or all of those levels. Analytic support should be scaled to the size of the issue.

  5. Study intractable problems, with help from academics, where practitioners under the pressure of daily operations might not have the capacity.

  6. Help elevate crime analysis and intelligence analysis to the level of a profession, without allowing it to be limited by a narrow focus on program evaluation.

  7. Develop a theory of analytic vigilance, to avoid “failures of imagination”: knowing how much to keep looking, and how to look, even when there might be nothing to find. These are complicated analytical questions about the nature of vigilance. They do not lend themselves to strict mathematical solutions. The challenges are in part political, as much art as science, necessarily subjective to a degree; but our approach to these puzzles needs to be much more analytical than it has been so far.

These are some of the areas where academia could offer so much more.

Conclusions

I hope that in some way my peculiar perspective on these things has been useful to you all. My goal has been to establish the right balance among, and an appropriate role for, a broad range of diverse contributions that scholars and analysts can make to support effective problem-solving. I have contrasted the distinctive contributions of social science program evaluation with the broad range of scientific inquiry modes employed by natural scientists, so as to usefully “round out” the role that scholars are prepared to play in advancing best problem-solving practice.

I hope at least some of you will be prepared to consider a much broader range of scholarly contributions in support of crime control. I also hope you will find it exciting to know that the methodologies you already use, and the modes of interaction between scholars and practitioners that we have begun to develop in the context of crime control—these same skills and methods are applicable across a vast field of social harms, way beyond the field of policing.

Notes

  1. Although this paper is based directly on the 2018 Jerry Lee Lecture, the process of editing for publication necessitated widespread deletions. The paper is best presented as “edited excerpts”: readers wanting the fuller experience are referred to the link below, where they are able to view the entire lecture in video format: https://sites.hks.harvard.edu/fs/msparrow/videos/Stockholm%20Criminology%20Symposium%202018--Malcolm%20Sparrow--the%20Jerry%20Lee%20Lecture--High%20Resolution.mp4

References

  • Black, N. (2001). Evidence based policy: Proceed with care. British Medical Journal, 323, 275–279.


  • Eck, J. E. (2002). Learning from experience in problem-oriented policing and situational prevention: The positive functions of weak evaluations and the negative functions of strong ones. In N. Tilley (Ed.), Evaluation of crime prevention (Crime Prevention Studies, Vol. 14). Monsey: Criminal Justice Press.


  • Eck, J. E., & Spelman, W. (1987). Problem solving: Problem-oriented policing in Newport News. Washington, DC: Police Executive Research Forum.


  • Lindblom, C. E. (1990). Inquiry and change: The troubled attempts to understand and shape society. New Haven: Yale University Press.


  • Moore, M. H. (2006). Improving police through expertise, experience, and experiments. In D. Weisburd & A. Braga (Eds.), Police innovation: Contrasting perspectives. Cambridge: Cambridge University Press.


  • Moore, M. H. (1995). Learning while doing: Linking knowledge to policy in the development of community policing and violence prevention in the United States. In P.-O. H. Wikstrom, R. V. Clarke, et al. (Eds.), Integrating crime prevention strategies: Propensity and opportunity. Stockholm: National Council of Crime Prevention.


  • Nagel, E. (1961). The structure of science. New York: Harcourt, Brace & World.


  • Paquet, G. (2009). Crippling epistemologies and governance failures: A plea for experimentalism. Ottawa: University of Ottawa Press.


  • Pawson, R., & Tilley, N. (1997). Realistic evaluation. London: Sage Publications.


  • Sherman, L. W. (1998). Ideas in American policing: Evidence-based policing. Washington, DC: Police Foundation.


  • Sparrow, M. K. (1994). Imposing duties: Government’s changing approach to compliance. Westport: Praeger Books.


  • Sparrow, M. K. (2000). The regulatory craft: Controlling risks, solving problems and managing compliance. Washington, DC: Brookings Institution Press.


  • Sparrow, M. K. (2008). The character of harms: Operational challenges in control. Cambridge: Cambridge University Press.


  • Sparrow, M. K. (2016). Handcuffed: What holds policing back, and the keys to reform. Washington, DC: Brookings Institution Press.



Authors’ contributions

The author read and approved the final manuscript.

Acknowledgements

Thanks to Gloria Laycock for helping prepare this article.

Competing interests

The author declares no competing interests.

Availability of data and materials

Not applicable.

Funding

Not applicable.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author information

Correspondence to Malcolm K. Sparrow.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Sparrow, M. K. Problem-oriented policing: matching the science to the art. Crime Sci 7, 14 (2018). https://doi.org/10.1186/s40163-018-0088-2

