Search Results
- OUR RESPONSE TO RESPONSE TIMES
When it comes to many vital public services, including police, fire and EMS, one of the primary – and sometimes the only – performance measurements that people use is response time. On the surface this makes a lot of sense. For emergency services, particularly, every moment can spell the difference between a minor incident and a tragedy. To the general public, fast response times are real, tangible evidence that they are getting good service. Just ask anyone who has waited for an emergency vehicle when a relative or friend might be undergoing a heart attack.

All that said, however, response times are often misunderstood. Sometimes, when they are overemphasized, they can actually lead to emergencies themselves. It's our guess that most people who read about response times aren't aware that they can be measured very differently by first responders. According to Lexipol, which provides information and tech solutions to help public safety organizations, there are three ways that response times are generally measured:

· "Turnout time – the elapsed time from when a unit is dispatched until that unit changes their status to 'responding.'"
· "Call processing time – the elapsed time from the call being received at the (public safety answering point) to the dispatching of the first unit."
· "Travel time – the elapsed time from when a unit begins to respond until its arrival on the scene."

There's a huge difference among the three – particularly from the point of view of the person who is urgently in need of help. With a shortage of EMS vehicles in many parts of the country, for example, after the 911 call is finished it can take the dispatcher valuable minutes to actually get an ambulance company to respond to the call. Once that happens, the ambulance still needs to arrive at the scene. From the perspective of the person who made the call, the response time might be 23 minutes (from call to help), not eight minutes (for the emergency vehicle to make the trip).

If response times are truly to be used as helpful performance measures, we'd argue that what really matters is the amount of time it takes from hanging up with 911 until help comes knocking on the door (or kicking it down in extreme instances). Other measures don't really reflect the customer experience.

Yet another issue with response times is that they don't take into account the specific situation – and that can jeopardize safety for others, including the responder. If someone thinks they've broken an arm, for example, and calls 911, it probably doesn't matter much if an ambulance arrives in ten minutes or twenty minutes. But if the call is for a fire or a heart attack, then every minute counts. Yet these different scenarios are commingled in published response times. And that means that when emergency vehicles are summoned, responders who are being held accountable for their response times race to the scene as quickly as possible – traveling far faster than the speed limit, going through stop signs and so on. No surprise that in 2021, according to the National Safety Council, 198 people "died in crashes involving emergency vehicles. The majority of these deaths were occupants of non-emergency vehicles."

Our recommendation is that response times, wherever possible, should be disaggregated in such a way as to differentiate between life-and-death emergencies and those that are far less serious in nature.
This would not only make the response time measures more useful – it might save other innocent lives along the way. (A simple sketch of what such a breakdown might look like follows below.) #StateandLocalGovernmentManagement #StateandLocalPerformanceMeasurement #ResponseTime #PoliceResponseTimeManagement #FireResponseTimeManagement #EMSResponseTime #ResponseTimeManagement #PoliceManagement #PoliceData #FireManagement #FireDepartmentData #EmergencyManagementResponseTime #StateandLocalDataGovernance #NationalSafetyCouncil #ResponseTimePerformanceMeasures #Lexipol #PerformanceMeasurement #PerformanceManagement #B&GReport
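Not part of the original post, but to make the arithmetic concrete, here is a minimal Python sketch – with invented timestamps and labels, not real dispatch data – of how a department could compute the caller-experienced response time (call processing + turnout + travel) and report it separately by call priority, as recommended above.

```python
from datetime import datetime
from statistics import median

# Hypothetical dispatch records: (priority, call received, first unit dispatched,
# unit marked "responding", unit arrived on scene). Times are illustrative only.
calls = [
    ("life-threatening", "12:00:00", "12:02:30", "12:04:00", "12:11:00"),
    ("non-urgent",       "13:00:00", "13:06:00", "13:09:00", "13:23:00"),
]

def minutes_between(start: str, end: str) -> float:
    fmt = "%H:%M:%S"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60

by_priority = {}
for priority, received, dispatched, responding, on_scene in calls:
    call_processing = minutes_between(received, dispatched)  # call received -> first unit dispatched
    turnout = minutes_between(dispatched, responding)        # dispatched -> status "responding"
    travel = minutes_between(responding, on_scene)           # responding -> arrival on scene
    total = call_processing + turnout + travel               # what the caller actually experiences
    by_priority.setdefault(priority, []).append(total)

for priority, totals in by_priority.items():
    print(f"{priority}: median caller-experienced response time {median(totals):.1f} minutes")
```

With these made-up numbers, the life-threatening call comes out at 11 minutes end to end and the non-urgent one at 23 – exactly the kind of distinction that gets lost when all calls are averaged together or when only travel time is reported.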
- THE TWELVE BIG LIES ABOUT STATE AND LOCAL GOVERNMENT
There are all kinds of variations on the theme of the three big lies that people tell in the normal course of day-to-day life. One of our favorite sets consists of: 1) This is for your own good. 2) It'll be done by 3:00. 3) It must be true; I heard it on the news. In the old days there was another one that was exceedingly popular. It was "The check is in the mail," but nowadays nobody much sends checks in the mail, so we'd offer a replacement for that one: "The check is being processed."

Those deceits, of course, are generic in nature. But over the years we've been collecting a series of mantras about the alleged reality of state and local government that don't necessarily work in the real world. We've heard them from people at all levels of government, sometimes from established authorities and sometimes from people who just pretend they understand the way government works. Here are our top twelve. We'd be interested in hearing additional ones from readers of this B&G Report. Of course, some of the dozen items that follow are valid sometimes. But we've heard them repeatedly when ample evidence demonstrates that they're wide of the mark.

By the way, we hesitate to use the word "lies" here, as that word seems to have become widely open to interpretation these days; it's frequently used just to describe something with which the accuser disagrees. So, just to be specific, what follows are explanations about the way things work that are frequently NOT the way things work. The list is based on both our own experience and the understanding of states and localities we've accumulated over the last thirty years.

1. "We know we are in financially sound shape because we have to pass a balanced budget."
2. "It's impossible to fire a public sector employee."
3. "We'll solve this problem by setting up a commission. Or a study group."
4. "Our transparency website means our government is transparent."
5. "Buying new technology will be the key."
6. "Merit pay is pay based on merit."
7. "The key reason we have a huge unfunded liability in our pensions is that our benefits are too rich."
8. "You should just look at the general fund in order to analyze our city or state's financial condition."
9. "You can always trust our data."
10. "Government can be run like a business."
11. "Everything we need to know is on the Internet."
12. "Once a piece of legislation is passed, that means that something is really going to happen."

#StateandLocalGovernmentManagement #StateandLocalHumanResources #StateandLocalBudgeting #StateandLocalGovernmentPerformance #StateandLocalGovernmentTransparency #StateandLocalGovernmentWorkforce #StateandLocalPension #StateandLocalGovernmentTechnology
- WHY THE PHRASE "BEST PRACTICE" MAKES US JITTERY
We have to admit it. More than once we've referred to a policy or management approach as a "best practice." But mostly those words were originally uttered by a source we quoted. Frankly, those two overwhelmingly common words often make us uneasy.

There may be cases in which best practices can apply from city to city and state to state. Best budgeting practices, for example – such as those developed by the Government Finance Officers Association – can certainly be useful. It's an accepted best practice in budgeting, for example, that entities should cover current year expenditures with current year revenues – not revenues borrowed from the future. Who can argue with that? Outside of budgeting, there are some other areas in which best practices can certainly hold up. And many of them, which may not have held true in the past, are now thankfully self-evident. In human resources, for example, it's certainly a best practice to make every effort to avoid explicit or implicit racism in hiring or recruiting. Or consider the realm of information technology, where no one can deny that sufficient training can be fairly called a best practice.

Before we go on, it seems worthwhile for us to provide our own definition of "best practice." Others may disagree, but it's the way the words sound to us – and we suspect to many others. We believe that the ubiquitous phrase should be used to describe management policies that can be applied pretty much universally. Best practices, we'd argue, should be something like plug-and-play models that others can pick up and use with a reasonable assurance of success.

But that's often not the way the words are used. For example, the latest glittery idea that seems appealing (but has only been proven worthwhile in a smattering of places) is often dubbed the best. We see this all over the place. People writing reports for any number of significant organizations will take the study of a handful of cities or states and list approaches they've uncovered as "best." Not to seem cynical, but we've noticed that the words "best practice" are often used by consulting firms to sell their own approaches.

For years, it was considered a best practice that states set aside exactly 5% of revenues in their rainy day funds. No more. No less. When we researched the topic, we discovered that this precise number emanated from an off-the-cuff comment in a speech given by a leader at one of the ratings agencies. As years have passed, thinking on the topic has grown more sophisticated. The Volcker Alliance, for example, has thrown that 5% figure out the window and encourages states to tie their reserve funding to the volatility of revenues (a simple illustration of that idea appears after this post).

Here are five reasons we are concerned when a best practice is ballyhooed by a government official:
1) Ideas that work in rural areas often don't apply well to densely populated cities.
2) Approaches for homogeneous regions may leave out elements important in places with greater diversity.
3) Things that work well in healthy economic times may need to be forgotten in the depths of a recession.
4) Changing times generally require new solutions. For example, in the depths of the pandemic it was a best practice not to shake hands. Nowadays, people even hug hello.
5) The label is too often applied before a notion has been properly evaluated and proven to be generally workable.

Fortunately, there are a number of alternative phrases that can be somewhat more accurate. We prefer "promising," "leading," or "accepted" practice. None of these implies a universally, unquestionably, absolutely superior way of doing government business.

Ultimately, this may all seem like a matter of semantics. But the fundamental reason we feel as we do about practices being labeled the "best" is that this phrasing may stand in the way of the evolution of thinking that's necessary for progress in states and localities. If we know the best way to do something, then why look for a better way? And the search for better-functioning government is the core of what we do for a living.

#StateandLocalGovernmentManagement #StateandLocalGovernmentPerformance #EvidenceBasedPractices #BestPracticeCynicism #ErroneousBestPracticeLabeling #AvoidingBestPracticeLabels #StateandLocalBudgeting #StateManagement #LocalManagement #PerformanceManagement #EvidenceBasedManagement #EvidenceBasedDecisionMaking #EvidenceBasedDecisionMakingShortcoming #GovernmentConsultantOverreach
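The Volcker Alliance's point is directional – reserves should reflect how volatile a state's revenues are – and it does not prescribe the formula below. Still, as a purely hypothetical sketch of what a volatility-linked target could look like (invented revenue figures, invented multiplier), something like this gets the idea across better than a flat 5% rule:

```python
from statistics import pstdev

# Hypothetical annual general fund revenues for one state (in $ millions).
revenues = [980, 1010, 1075, 990, 1120, 1180, 1090, 1205]

# Year-over-year percentage changes in revenue.
changes = [(b - a) / a for a, b in zip(revenues, revenues[1:])]

volatility = pstdev(changes)              # how bumpy the revenue history has been
multiplier = 2.0                          # invented policy choice, not a published standard
reserve_target = multiplier * volatility  # target reserve as a share of annual revenues

print(f"Revenue volatility (std. dev. of annual change): {volatility:.1%}")
print(f"Illustrative reserve target: {reserve_target:.1%} of annual revenues "
      f"(vs. the old one-size-fits-all 5%)")
```

A state with steadier revenues would end up with a smaller target under a rule like this; a boom-and-bust state would end up with a larger one – which is the whole argument against a single number for everyone.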
- FIGHTING FRAUD: ADVICE FOR SMALLER CITIES
Whatever their size, all state and local governments must contend with the possibility that federal grant money or taxpayer revenues will be siphoned off by perpetrators of fraud. While many of the defenses against fraud are similar, there are some cautions that particularly apply to smaller communities.

In an interview that appeared on Route Fifty's October 11, 2022 "Follow the Money" broadcast, City Manager José Madrigal talked about the effort to fight fraud in Durango, a city of 19,000 in a rural county in the southwest part of Colorado. We're highlighting here a few of Madrigal's comments which are particularly germane to the 19,000 cities, towns and villages with fewer than 25,000 people. The following edited comments occur toward the end of the broadcast, slightly after the 15-minute mark.

The hazards of personal connections in a smaller community: Madrigal said, "Connection can be a really great thing because, you know, with that small town feel, everybody knows each other. My kids go to school with a lot of my co-workers' kids. They sometimes hang out. We're very well connected and sometimes with that personal connection comes a letdown in your guard.

"You know the person. Our kids play soccer together and they play basketball together . . . You start building all of these social connections. In places where you may not be personally connected, it's easier to be a little more suspicious."

On learning from larger communities: Madrigal remarked that cities with smaller populations could still model themselves on bigger cities and not view size as a barrier. "Some people who have not been in a bigger city have this shield. 'Oh, no. Can't do that. . . They have more resources than we'll ever have.'

"I think there's ways where you can scale a lot of those things. I may not have a 30-member accounting department, but I have 15 and I can be able to do some things in a better way.

"I think sometimes the bravado of coming from a small town or representing a small town (makes us think) we can't do it like bigger towns. There's a lot of processes that are out there that I think you can definitely scale down so as not to be intimidated by the processes of bigger areas. Look at them and say, 'What can I bring in?'"

#Fraud #PublicSectorFraud #Durango #Colorado #JoseMadrigal #FightingFraud #RouteFifty #FollowTheMoney #StateandLocalGovernment #StateandLocalGovernmentFraud #CityandCountyManagement #StateandLocalGovernmentManagement #SmallCityCulture #GovernmentOversight
- GUIDELINES FOR BUILDING A POTENT PERFORMANCE MANAGEMENT SYSTEM
In early 2022, we were honored to join the late Harry Hatry, a pioneer of performance management and a distinguished fellow at the Urban Institute, and Batia Katz, also of the Urban Institute, in co-authoring one of his last papers, which was titled "Do's and Don'ts: Tips for Strengthening Your Performance Management Systems." Hatry, whom we'd known for decades, passed away on February 20, 2023 at the age of 92.

The paper also included contributions from a formidable group of performance experts. The list at the time included Maria Aristigueta, dean of the Biden School of Public Policy and Administration at the University of Delaware; Don Moynihan, McCourt Chair at the McCourt School of Public Policy at Georgetown University; and Kathy Newcomer, professor in the Trachtenberg School of Public Policy and Public Administration at the George Washington University.

The paper sums up a great deal of performance measurement and management knowledge that Hatry and others put together over many years. Here are a handful of our favorite items under the category of "Do's." They aren't taken verbatim from the paper – we've edited many for length.

Collecting Performance Data
Do seek input from stakeholders as part of your process for selecting performance measures to track. Stakeholders are likely to include frontline employees (those who serve your program participants); special interest group members; elected officials; and, especially, participants in each relevant demographic group.
Do make sure that mission statements focus on expected benefits. What benefits or outcomes are sought from the program, and for whom? What negative outcomes does the program seek to avoid or alleviate? Too often, mission statements identify only how benefits will be achieved without clarifying what benefits are sought.
Do include both output and outcome indicators in performance measurement systems. Select the measurements used to track the performance of new programs at an early stage of program development. Defining and sharing these measurements will provide guidance for people in positions of authority in the program about what is expected of them.

Analyzing Performance Data
Do compare the outcome values broken out (disaggregated) by demographic characteristics (such as age group, race/ethnicity, gender, educational level, and location). This is of major importance in identifying service equity issues and identifying different service procedures that would boost the outcomes for different demographic groups. Identify and highlight in performance reports unexpected issues indicated by these breakouts. (A brief sketch of this kind of breakout follows this post.)
Do compare the performance values over time. Look for trends that indicate the need for action.
Do compare performance indicators' actual values with the targets that had been set for them. Targets are used both as an accountability tool and to motivate program staff.

Presenting Performance Findings
Do identify any significant ways in which the data collection has changed over time. Otherwise, users may be misled by year-to-year changes that are not attributable to real-world improvements or declines but simply to changes in the way the data have been created.
Do clearly define each performance indicator. Both the data collectors and data users should be able to understand what is being measured. For example, fire departments can measure response time from the moment the call comes in until trucks arrive at the scene, or alternatively they can provide the same measure beginning the moment the trucks leave the station.
Do tell stories that illustrate data's meaning and importance. Numbers alone will only communicate effectively to readers who enter a document with curiosity. Real-world anecdotes will engage a far larger audience.

Disseminating Performance Findings
Do share findings with the press. One common complaint is that performance information only gets attention when it's negative. That can only be counteracted with a proactive approach. One key to getting attention in the press is to provide information that runs contrary to common assumptions.
Do make the latest data on the performance indicators readily accessible to program managers throughout the year. As issues arise during the year, the latest performance data should be available to managers to use in addressing those issues.
Do provide summaries and highlights to report audiences after each performance report is produced.

Using Performance Findings
Do reply to queries about findings, even if they are critical in nature. If it turns out that a query challenges findings in a way that could raise some doubts, it's worth acknowledging that. Trust and credibility grow when room for doubt is acknowledged.
Do periodically review the performance measurement process and update it as needed. Is it tracking the right things? Are the performance data and the data collection procedures producing data of sufficient quality? Is the information of sufficient value to justify a measurement's added cost? Are the performance findings clearly presented? Has the information gotten to all those who can use it?
Do unleash the full power of performance data, not only through regularly published reports, but also at other opportunities throughout the year. Use the performance measurement process to help address issues as they arise. This will enable decisions to be made with the latest available performance data. It will also enable program managers to obtain performance information tailored more closely to the particular issue at hand.
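Here is a minimal sketch of our own (it is not from the Hatry paper), using entirely invented program figures, of two of the "Do's" above: breaking an outcome indicator out by group and comparing actual values against a target.

```python
# Invented program-completion figures, broken out by age group.
records = [
    {"group": "18-34", "completed": 412, "enrolled": 530},
    {"group": "35-64", "completed": 388, "enrolled": 610},
    {"group": "65+",   "completed": 120, "enrolled": 240},
]
target_rate = 0.70  # hypothetical annual target for the completion rate

for r in records:
    rate = r["completed"] / r["enrolled"]
    status = "meets target" if rate >= target_rate else "below target"
    print(f"{r['group']}: {rate:.1%} completion rate ({status})")

# The overall figure can look acceptable even when one group lags badly.
overall = sum(r["completed"] for r in records) / sum(r["enrolled"] for r in records)
print(f"All participants: {overall:.1%} (target {target_rate:.0%})")
```

With these made-up numbers, the overall completion rate sits just a few points under target while one group is twenty points below it – exactly the kind of equity issue that disaggregation is meant to surface.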
- MOUSETRAPS FOR FLAWED DATA
It may seem a little heavy-handed, but for years now we've been writing about the endless reams of bad data that are used to manage and to make policy. For the most part, we've pointed to issues that require careful examination of the information to determine if it's trustworthy or not. But, as time has passed, we've come across a great many signals, easily spotted and identified, that point to quicker recognition that information should be scrutinized. Here are a half dozen examples:

1) Beware comparisons between absolute figures that come from different-size cities or states. If, for example, something criminal happens to hundreds of people in California, that may not be nearly as alarming a situation as when the same thing happens to dozens of residents of Wyoming or North Dakota. (A brief per-capita sketch follows this post.)

2) Sometimes reports or articles use numbers that are so precise as to be unbelievable. It seems to us that when project spending is reported as $1,436,432.15, there's no legitimate way to figure costs out to cents, dollars or hundreds of dollars. A tight range is often more useful and believable.

3) Speaking of ranges, it's self-evidently problematic when an expense is reported as somewhere between $100 million and $500 million. Either not enough due diligence has been done, or the estimators are living in the Land of the Wild Guess.

4) If you're relying on data for which no assumptions are provided, dig deeper. When discount rates vary between two state pension plans, it's entirely possible that the liability figures are not comparable.

5) Watch out for figures that are huge beyond common sense. Some years ago, there was a lot of talk about one million children being abducted each year. Living in New York City at the time, we saw news reports that were full of the story of just one little boy, Etan Patz, who was last seen at a bus stop in lower Manhattan. How could it be that if such huge numbers of children were disappearing, one child was getting so very much attention? It turned out, according to the Denver Post in 1986, that the "national paranoia" raised by the one million figure wasn't really reflective of scary men luring children into their cars with candy – but rather of children taken in custody battles. (And the often-repeated one million figure was also an exaggeration. In 2017, the Justice Department reported that the number of serious parental abductions is closer to 156,000 annually, of which about 30,500 reach the level of a police report.)

6) Information that is self-reported to the federal government by states, or by cities and counties to the states, can be questionable. A question like "Does your city use performance information?" can get "yes" answers regardless of differing definitions and degree of use. In the past, a big-city mayor told us that his community used measurements to make decisions about road conditions. When we pursued the question, it developed that the only data the city had was an accumulation of citizen complaints about potholes.

#StateandLocalDataManagement #StateDataQuality #CityDataQuality #CountyDataQuality #SuspiciousGovernmentData #FlawedGovernmentData #BadData #InaccurateGovernmentData #StateandLocalGovernmentDataHazard #StateLocalGovernmentDataHazard #AvoidingInaccurateStateGovernmentDataComparisons #AvoidingGovernmentDataErrors #HowToIdentifyBadGovernmentData #TrustAndFaultyGovernmentData #StateGovernmentDataAssumptions #StateLocalPensionPlanDiscountRate #PublicSectorDataCaution #DataStandardization #SelfReportedDataSkepticism #SpottingCommonGovernmentDataMistakes
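Picking up on the first mousetrap above, here is a tiny sketch with made-up incident counts (the populations are rough, real-world figures) showing why absolute numbers should be converted to a per-100,000 rate before comparing places of very different size.

```python
# Hypothetical incident counts for two states of very different size.
incidents = {"California": 400, "Wyoming": 40}
# Rough resident populations (circa the 2020s).
population = {"California": 39_000_000, "Wyoming": 580_000}

for state, count in incidents.items():
    rate = count / population[state] * 100_000  # incidents per 100,000 residents
    print(f"{state}: {count} incidents, {rate:.1f} per 100,000 residents")

# With these invented counts, Wyoming's 40 incidents translate to a rate roughly
# seven times higher than California's 400 -- the raw totals point the other way.
```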
- THE ACADEMIC-GOVERNMENT COMMUNICATIONS GAP
We spend a lot of time talking with government practitioners and a lot of time talking with academic researchers. We've often wondered about the barriers that keep them from talking with one another as much as they should. That's why we've been particularly charmed by the Office of Strategic Partnerships in North Carolina, which we wrote about in an April 16 column for Route Fifty. The effort there has been to close the gap that often exists between the multiple academic researchers in a state and the government officials who are often addressing the same topics – just in different ways.

Here's what Jenni Owen, the director of North Carolina's Office of Strategic Partnerships, told us last month: "People in academia who say they want to have an impact on policy really mean it. And people in government who say they want evidence and data to inform their decisions also really mean it. But the way they each go about doing that is often clunky."

Witness her experience in a recent meeting with about 100 faculty members and doctoral students. "How many of you are pursuing or have something on your research agenda that you think has implications for public policy?" she asked. Everybody raised their hand. Next question: "How many of you have talked to anybody in government ever about the topic you're pursuing or thinking about pursuing from a research perspective?" And one person raised a hand. Cautiously.

The problem is not one-sided. Government officials often have little time to seek out good research themselves and no easy way to know what's going on in the multiple institutions of higher learning within the borders of their state or beyond. North Carolina has set up formal ways for government departments to communicate their research needs to universities across the state. But Owen, who has vast experience in both academia and government, and others we spoke with also pointed to relatively simple ways the relationship could be improved.

1. While state and local governments can certainly do a better job communicating the different initiatives that they're working on, researchers can also do more to actively learn about their own governments. Owen and OSP often advise researchers to watch the press releases from an agency, for example; pay attention to interim committee study groups; learn about organizational structure; and look at departmental goal-setting, strategic plans and areas of cross-department collaboration.

2. Advice also focuses on initiating communications at the beginning of a research project, not when it's finished. This requires knowing about new or ongoing government initiatives that might connect with research and touching base at an early stage with a couple of sentences about the relevance of the research to the initiative.

3. That means that before communicating, it's important to understand areas of jurisdiction, a bit about departmental organizational structure and some basics about operations. If the research is about county jails, it's likely an error to focus attention on the state Department of Corrections, which may have limited responsibility in that area. Likewise, it sends a bad signal to inquire as to whether someone is likely to be re-elected when they're actually appointed to their position.

A few other tips for researchers who want to see their work have impact:

Don't wait until a journal article is published to send it to a government official and hope that they read it without any explanation as to why it would interest them. Make sure that the journal article is relevant to the work the official is doing and include a sentence as to why you're sending it to them.

Understand who the players are below the top level. Communications don't have to go to the Cabinet Secretary or the Commissioner. As Owen told us, "Don't assume that the leader is where you need to start."

Consider ways to collaborate. University research may overlap with a government's own specific research needs; see if the research you're doing can also address those needs. Says Owen: "I dream about the day when a researcher says to a government entity, 'I'm going to go interview new parents in rural areas. Is there even just one question you'd like me to ask?'

"These are not just small gestures of partnership; they are also substantive. It shows that researchers are thinking about policies and programs and applications of research learning for government decision-making. This can be a game-changer for the partners."

One more thing: The gap between professions, the difficulty in communicating and the caution with which each side approaches the other are things we run into all the time. Our book, "The Little Guide to Writing for Impact," was published last month. It was written in collaboration with Donald F. Kettl and motivated by our mutual sense of a pervasive frustration among academics, editors and publishers that different styles of writing and communication were often standing in the way of getting important research findings across to the practitioners who could put them into action.

#AcademicGovernmentCommunicationsGap #NorthCarolinaOfficeOfStrategicPartnerships #JenniOwen #StateandLocalGovernmentResearch #StateGovernment&UniversityResearchLink #GovernmentAcademicCommunication #TipsforAcademicResearchers #StateandLocalGovernmentManagement #University&StateGovernmentRelations #OptimizingUniversityResearchforGovernmentDecisionMaking #UniversityGovernmentCollaboration #ImprovingResearchPartnerships #OptimizingGovernmentResearchNeeds #StateGovernmentResearchNeeds #ApplyingUniversityResearchToPolicy #UniversityGovernmentResearchGap #LinkingAcademicResearch&StateGovernment #B&GReport #RouteFifty #LittleGuidetoWritingforImpact #DonaldFKettl
- WANT TO WRITE SO THAT OTHERS CAN USE WHAT YOU'VE WRITTEN? HERE'S YOUR CHANCE!
In 1799, when Napoleon's forces were in Egypt, a stone slab was discovered that came to be called the Rosetta Stone. It bore text in three forms, including Egyptian hieroglyphics, which hadn't been understood since before the fall of the Roman Empire. The written wisdom of the ancient world had been lost for centuries, but the stone made it decipherable.

We want to be the modern-day equivalent of the Rosetta Stone (a peculiar aspiration perhaps for people instead of rocks). Only instead of making ancient script comprehensible in the modern age, we want to unlock the mysteries of the kind of writing done by people trained in academese for the rest of the world.

Toward that end, in collaboration with one of the smartest men we know, Donald F. Kettl, author of 25 books and professor emeritus and former dean of the Maryland School of Public Policy, we've written a new book titled "The Little Guide to Writing for Impact" (Rowman & Littlefield, 2024). The book presents a series of guidelines that will enable readers to successfully frame a policy argument; pitch it to editors; organize the work so that the ideas have real impact; support it with data and stories; find the right publisher; and follow up after publication to ensure that the argument has enduring impact. It's aimed at people who want to write everything from short blog posts through op-eds, commentaries and policy briefs, dissertations, articles for both the popular press and academic journals, and books.

Truth in Advertising: The major point of this B&G Report is to persuade you to:
· Tell others about the book if you think they can make use of it.
· Buy the book yourself.
· Use the book in your classes if you're teaching.

In short, this is the most self-serving B&G Report we've ever written. But we're just vain enough to believe that it can be of genuine use to you, your colleagues, your students, and your friends. Here are some comments we've received about the book:

Donna Shalala, Interim President of The New School and former secretary of the U.S. Department of Health and Human Services, commented that the book is "A little book that will have a big impact on policy. Imagine a whole generation who can clearly communicate great ideas!"

Katherine Willoughby, editor-in-chief of Public Administration Review and Golembiewski Professor of Public Administration at the University of Georgia, said that "If you want to author a classic book, have your research published in a premier academic journal, complete an award-winning dissertation, or simply write better, consult The Little Guide to Writing for Impact. This quick read is chock-full of golden nuggets that, if engaged, will boost your influence on people and policy through your writing."

Chris Morrill, the Executive Director of the Government Finance Officers Association, commented that "With notes of Strunk and White's Elements of Style, Barrett, Greene, and Kettl have gifted us a highly practical guide for communicating in a hyper-distracted world. Even with an array of new digital tools and artificial intelligence, at core communicating involves crafting a clear, concise, and compelling message. Barrett, Greene, and Kettl gives us the tools to do so."

Finally (actually there are more, but we're running out of space), Trevor Brown, dean of the John Glenn College of Public Affairs at The Ohio State University, wrote that "If you read it carefully and take its lessons to heart, this little book can have a big impact on the quality of your writing.
Useful, readable, and above all sensible, it's pitched to scholars and policy wonks who want to reach a broad audience, but it will be helpful to anyone who puts words on paper and wants them to be read, understood, and to matter." There are two ways for you to purchase this book: Go right to Amazon.com where you’ll find it by clicking here. Alternatively, for readers of our website, we're providing a 30% discount on the book. To take advantage of this offer, click here and after registering to make a purchase, enter the code: WF130. #LittleGuidetoWritingforImpact #StateandLocalManagement #StateandLocalGovernment #WritingforGovernmentImpact #WritingforPolicyImpact #AcademicImpactonPolicy #CommunicatingAcademicResearch #AcademicImpactonStateGovernment #AcademicImpactonLocalGovernment #WritingforImpact #KatherineBarrett #RichardGreene #DonaldFKettl #Rowman&Littlefield #AcademicWriting #CommunicatingWithPolicyMakers #WritingGuide #Barrett&Greene #B&GReport #NewBarrettGreeneKettlWritingGuide #UniversityofMarylandSchoolofPublicPolicy
- THE PERILS AND PRICE OF SPEEDY COMMUNICATIONS
We remember the exciting day when we bought our first IBM PC and a printer for $7,500 back in 1981. (Yes. You read that number right.) Our exciting new computer had no hard drive and its operating system existed on a floppy disk. Years later, after a few computer upgrades, we heard about this thing called a gigabyte. That seemed like an unimaginable amount of space – probably enough to store all the information in our world. It wasn't so much later that we had scores and then hundreds of gigabytes on our desks. These days we're all hot and bothered about the ways we can use AI.

So, before we say anything more about the various problems that come along with advances in communications technologies, let it be clear that we're thoroughly captivated by technology and hope we always will be. But when it comes to communicating with one another, we are frustrated by the losses we've suffered each time something new comes along.

Back in the days when fax machines were the brave new world, lots of time was saved by sending letters instantaneously all around the world. But soon afterwards, every organization had a fax machine, with its number on business cards (those were the days when people still used business cards), and all kinds of hitches began to appear. For example, mass mailings (A free trip to the Bahamas!) started to clog up fax machines. Faxes often didn't come through. They got ignored as they piled up in a central spot awaiting someone to bring them to their rightful recipient.

But that was only the beginning of a downward spiral. E-mails are another example. Soon after we adjusted to communications arriving this way, we began to miss the old-fashioned mail system. Even more, we began to miss the old-fashioned telephone, which allowed you to decipher, through the tone of someone's voice, whether they were sincere or sarcastic.

Of course, e-mails have made the world a speedier place. People can exchange information and documents quickly – a major plus for us as researchers. But the negatives have mounted up. For one thing, e-mails have led to an unhealthy 24/7 world. E-mails pop up in the middle of the night and they know no such thing as weekends. For a while, we worked with someone who would send out e-mails on Sunday afternoons beginning by saying "Hope you've had a nice weekend," under the assumption that recipients must be ready to get back to work on Sunday.

Then there's the lack of thought that many people put into what they send by e-mail. People in a rush can sound terse and even rude in an e-mail, even when that wasn't their desire. Most people have learned that the use of all capital letters comes across like yelling, but that's a lesson that bears repeating. It's surprising how little care is taken in getting names spelled properly. Or even using the right names in the first place. Our little company is called "Barrett and Greene, Inc." You might be surprised to know how many notes we get (and these aren't mass mailings either) addressed to "Dear Barrett."

Of real frustration is the desire to move so quickly through seemingly endless stacks of e-mail that people never read the entirety of notes they receive, necessitating a long exchange that would have been avoided with five minutes on the phone. Following is the kind of thing we (and we suspect you) go through regularly:

Us: "Thank you for your willingness to work with us. Can we talk on March 31, and if so, what time would be good with you?"
Them: "Yes, the 31st will work."
Us: "Terrific. Just let us know what time will work for you and the best way to reach you."
Them: "How's 3:30?"
Us: "That will work fine. But did you mean Central Time or Eastern Time? And how should we reach you?"
Them: "Sorry, I should have been clearer. I meant Central Time."
Us: "That'll work well. But could you please send us the best way to reach you?"

Then we wait for two days and write again asking for the best way to reach the other party, only to get an automatic reply saying they're out of the office for the rest of the week.

Worse yet, from the point of view of style and tone, is the growing number of people who are relying on texts, which often include acronyms that send us searching the internet for their meaning. We got used to LOL a long time ago. And we picked up on IMHO, too. But the acronyms keep coming. Not long ago we got a three-letter text that just said "NVM." Turned out it meant "never mind," which pertained to a prior text. And if style and tone can be lost in e-mails, they entirely disappear in texts.

As far back as 1546, writer John Heywood recorded the proverb "Haste maketh waste." Some things never change.

#ChangingCommunications #ChangingTimes #EmailFrustration #EmailMiscommunication #TextingFrustration #B&GReport
- TRANSPARENCY AND TRUST
As we recently reported in the second of a two-part series about trust in government for Route Fifty, about 45% of Americans have a less than favorable view of the trustworthiness of local governments, according to data from Polco. That's somewhat up from 40% in 2017. And while that's a better showing than the federal government gets, it's still a very sorry state of affairs.

In that series, we recommended several ways that states and localities can help engender greater confidence in their efforts to serve residents; the one that was probably nearest and dearest to our hearts was the use of performance management. Of course, simply measuring everything in sight isn't going to grab the public's attention. In fact, it's repeatedly dismayed us that governments that have robust means of measuring quality are often skeptical about sharing their findings with the public. Some seem to believe that they'll only be hit over the head with a statistical stick when efforts don't pay off.

As Marc Holzer, a well-known academic and author of Rethinking Public Administration, says, "We have a lot of data out there and a lot of performance measures. But most citizens don't have access to that because it's not communicated to them. And in many cases, it's deliberately hidden by management because they don't want to put themselves in the line of fire."

That's a big mistake. People mistrust what they don't understand. They're more inclined to have faith in an institution that is candid, even when it's open about mistakes or when "performance is proven to be poor," says Michael Pagano, dean emeritus of the College of Urban Planning and Public Affairs at the University of Illinois Chicago. "If voters trust that the government is providing accurate information, they will continue to trust."

There's little question that there's a strong journalistic urge to put bad news on the front page, while better news winds up someplace on page seven. As The Guardian reported some years back, "people's interest in news is much more intense when there is a perceived threat to their way of life. They care much less about what happens around them when they enjoy relative peace and/or relative prosperity." But as true as that may be, we'd like to make the argument that even if bad news trumps good news, transparency can help cultivate trust in times when the news may not be good.

This is particularly true at the local level, where people tend to know what's happening around them. They know when the roads are falling apart. They know when there are homeless people wrapped in newspapers on the streets. They know when their children pretend to be sick rather than attend a dangerous school. Hiding the truth doesn't help. What does help is telling the truth – good or bad – and telling the public what's being done to make things better.

#TrustInGovernment #StateandlocalTransparency #PublicSectorTransparency #StateandLocalManagement #StateandLocalPerformanceManagement #RouteFifty #POLCO #RethinkingPublicAdministration #MarcHolzer #MichaelPagano #ReportingStateandLocalPerformance #StateandLocalMedia #StateandLocalCommunications
- WHY MANY STATE AND CITY RANKINGS DEFY REALITY
Back some years ago, when we first started to evaluate management capacity in states, counties, and cities for the now-defunct Financial World magazine, we were forced by the editors and publisher to rank the entities we were evaluating from best to worst. We hated that for many reasons. As far as we could see, the difference between number 29 and number 30 wasn't even marginally significant, and yet these comparisons were often picked up by the local press. That made the publisher happy, as he loved to get lots of attention, but it never ceased to bother us.

Subsequently, when we began our work on the Government Performance Project, we took great care to make it clear that while we were evaluating and even grading the states, we weren't ranking them. We carefully avoided ever using that word, preferring to refer to our "evaluations." Perhaps the GPP, which utilized the skills of many highly regarded academics and a team of journalists, didn't stir up the same kind of media frenzy as the far, far less rigorous Financial World work (which was entirely done by the two of us), but the leadership at Pew and Governing was more interested in contributing solid, useful information to the world of public sector management than in creating a stir.

In the years that have passed, it seems to us that there must be some kind of gold mine in the field of publishing 1-50 rankings of the states and similar lists of best and worst cities. And we cringe when we see many of them, for a variety of reasons.

Forbes (which seems addicted to these kinds of lists) went so far about a year ago as to publish a 50-state ranking titled "States With The Most Devoted Dog Owners." According to the article, the ranking was based on a survey of 10,000 dog owners (200 per state) and compared them across seven metrics, including "the percentage of dog owners who broke up with a significant other who didn't like their dog." Apparently, "6.78% of dog owners broke up with a significant other who didn't like their dog." Woof. Beyond the dubious nature of this kind of metric, and the value of such a list in the first place, the idea that you can get a solid sampling by asking 200 people from every state, regardless of its size, has zero merit. The 200 California respondents represented about .0005% of that state's population.

We don't want to get distracted by criticizing this kind of foolishness, though. That's like shooting fish in a barrel. We're far more concerned about rankings that are taken somewhat more seriously. For example, though we won't be the first or the last to complain about the value of the U.S. News rankings of universities, they're worth mentioning here. For starters, these rankings always seem like a dangerous exercise to us, as we see families making decisions about college selections based on these rankings instead of the value of the program to which the high school senior is applying. Beyond that, there have been plenty of criticisms of the methodology used to make these lists, and lots of complaints about the ever-shifting methodology, which makes for significant changes in the rankings themselves.

As Daniel Diermeier, chancellor of Vanderbilt University, wrote in Inside Higher Ed, "Does this mean those of us who've fallen in the rankings are objectively worse than we were a year ago? Does it mean a university that shot up the list is suddenly orders of magnitude better? Of course not. The shifts in rankings are largely due to the changes in methodology." (A toy illustration of that point follows this post.) This raises two questions: Was last year's methodology wrong, and is that why there was a change? Or is it in the interests of the publication to see changes from year to year in order to make the horse race more exciting?

If all this weren't cause for concern about the validity of these rankings, then consider the January New York Times article that pointed out that "U.S. News sells 'badges' to colleges so they can promote their rankings – whether they are 1st, 10th or much much lower."

While the college rankings are probably the best known, there is also a plethora of lists of "best places to work." We can't begin to enumerate all the potential flaws in these lists, but the degree to which they vary from ranking to ranking isn't a very good signal that they should be regarded as valid. For example, one list of "the best and worst states for work-life balance" indicated that New Hampshire was the best of the lot. But then there was another ranking that claimed to demonstrate that New Hampshire was the worst state to be a teacher. Don't teachers care about work-life balance? We'll bet they do. Let's say for the sake of argument – and we don't believe a word of it – that both lists were accurate. Teachers reading the first one could be heading as fast as they can to New Hampshire, only to find out that in their profession they'd be better off anyplace else.

Finally, let's think a bit about the "best places to live" lists. Best for whom? These are almost always blunt instruments for coming up with a very complicated answer. Some lists use the level of home ownership as a measure of a good place to live. But that would mean that Manhattan, where high costs mean that only about 24 percent of the population own their own place, is probably not the place for you. Yet there are clearly other reasons some people love living in Manhattan. We did for over 35 years and cherished every minute of it. All while paying rent.

One more: Let's say that in your opinion low taxes are a wonderful way to pick your home state. Lots of lists rank the states by that criterion, and you'd be led to believe you should head for Florida, which is famous for its exceedingly low tax burden. But do you have children in schools? Then it may be important that your teachers are well paid, and on that measure, Florida could hardly do any worse. Take things a step further and assume that you only care about low taxes and have no interest in the children of the state – but you happen to be a member of the LGBTQ community – well, we don't need to say any more about that.

#StateandLocalManagement #StateandLocalManagementRanking #FlawedStateRanking #FlawedCityRanking #FlawedBestPlacesToLiveRanking #FlawedUniversityRanking #USNews&WorldReportCollegeRanking #ForbesRanking #GovernmentPerformanceProject #FinancialWorldStateRanking #FinancialWorldCityRanking #FinancialWorldGovernmentRanking #GoverningGradingtheStates #InsideHigherEducation #SillyStateComparisons #StateRanking #CityRanking #BestPlacestoLiveRanking #CollegeRanking #B&GReport
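Not from the article, but a toy Python sketch of Diermeier's point: with entirely invented scores and weights, re-weighting the same underlying metrics reshuffles a ranking even though nothing about the schools themselves has changed.

```python
# Invented scores for three hypothetical universities on three metrics (0-100 scale).
schools = {
    "University A": {"outcomes": 85, "resources": 70, "reputation": 90},
    "University B": {"outcomes": 78, "resources": 92, "reputation": 75},
    "University C": {"outcomes": 90, "resources": 65, "reputation": 80},
}

def rank(weights):
    # Composite score = weighted sum of the metrics; return names ordered best to worst.
    scores = {
        name: sum(weights[m] * value for m, value in metrics.items())
        for name, metrics in schools.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

old_weights = {"outcomes": 0.3, "resources": 0.4, "reputation": 0.3}
new_weights = {"outcomes": 0.6, "resources": 0.1, "reputation": 0.3}

print("Old methodology:", rank(old_weights))
print("New methodology:", rank(new_weights))
# Nothing about the schools changed -- only the weights did -- yet the order shifts.
```

Under the first set of weights, University B comes out on top; shift the weights and University A leads while B falls to last, which is the year-to-year "horse race" effect described above.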
- "You Might Be Right," a B&G Report Podcast Recommendation
In the tension-filled political week before the midterm elections, we were particularly grateful for a relatively new podcast called "You Might Be Right," a mostly weekly, sometimes biweekly show that features two former governors of Tennessee talking about a wide variety of issues. The particularly refreshing part of this show for us is that Governor Bill Haslam is a Republican and Governor Phil Bredesen is a Democrat, and they talk to each other about complex issues in the most civil terms, without attack or easy soundbite.

In the first months of the show, produced at the Howard Baker Jr. Center for Public Policy at the University of Tennessee, Knoxville, guests have been invited to explain their positions on opposite sides of such topics as the Affordable Care Act, charter schools, the national debt and the affordable housing crisis. The governors ask questions, talk about the issues, and occasionally reflect on their own leadership experiences, leaving listeners with lots to think about and no easy answers.

The Affordable Care Act discussion, for example, which aired on October 26, gave the governors an opportunity to question Nancy-Ann DeParle, one of the architects of the Act during the Obama administration, and Larry Van Horn, a Vanderbilt professor who has been a noted critic of it. At its conclusion, one of Phil Bredesen's comments deftly summed up his reaction to the discussion, making a key point about the nature of the podcast at the same time. "Most problems benefit from picking and choosing a little bit from different ideologies. And I think both of us have had the experience of doing that. So, I looked at both of these not in terms of one is the right answer and one is the wrong answer, but what can we glean from each of them, which can be useful in solving the problem."

The show's title is based on the late Senator Baker's belief that solid answers to vexing problems emerge when listening to someone on the opposite side. "In politics, the competition for ideas, the competition for the right to serve is fundamental and it is political. But it must be accompanied by a decent respect for the other fellow's point of view. Because if you don't do that, the whole system falls, it collapses, if you don't admit that the other person may be right from time to time."

New podcasts are generally posted on Wednesdays, with access to episodes here. We like the fact that transcripts are also available, as it's sometimes tricky to tell which governor is talking if you don't come from Tennessee and have a solid familiarity with their voices. Listen and see what you think of this B&G Report Recommendation. And if the spirit moves you, suggest other podcasts that might be of interest to those of us who are dedicated to state and local government.

#DedicatedtoStateandLocalGovernment #StateandLocalPolicyImplementation #IntergovernmentalRelations #QuotesaboutGovernment #BarrettandGreeneRecommendations #PolicyImplementation #PolicyOversight #PodcastRecommendation #StateofTennessee #HowardBakerJrCenterforPublicPolicy #GovernorPhilBredesen #GovernorBillHaslam #AffordableHousing #AffordableCareAct #YouMightBeRightPodcast