OK. Let's go ahead and get started. Can everybody hear me? So welcome to Grand Rounds. My name is Karen Warburton. It's my pleasure today to introduce my friend and mentor, Dr. Jennifer Kogan. Dr. Kogan is a professor of medicine at the University of Pennsylvania. She's been at Penn for her entire career-- as a medical student, resident, general internal medicine fellow, and then she joined the faculty after fellowship. She's quickly risen through the ranks and is now a full professor in a very competitive research track at Penn. And I just have to say-- having watched her as one of her mentees-- it's been so much fun to see her trailblaze through a track that has traditionally not been one on which medical educators have succeeded. And she's really paved the way for many other medical educators there to go through this track.

She wears many hats at Penn. She was the clerkship director of the internal medicine clerkship for 13 years. She's been the director of undergraduate medical education in the Department of Medicine at Penn, and is now acting as the assistant dean for faculty development. At a national level, she has been the president of the Clerkship Directors in Internal Medicine. She's held many other national roles. She has been recognized widely with awards, both for her direct teaching abilities and for her medical education scholarship, and she's a national expert in the field of assessment and feedback. And she's here to talk to us today about the importance of direct observation. So welcome, Jen, thank you so much for being here. [APPLAUSE]

OK. So thank you, all, for having me. It is really wonderful to be here. And, I guess, the one other thing I just wanted to add-- thank you for that introduction-- is that when I think about direct observation and we talk about it this morning, I have all of these administrative roles, but I'm a general internist. I have medical students who rotate with me in the office. I precept in resident clinic. And although I do research on direct observation and assessment, I think what also informs a lot of my thoughts about what we're going to talk about today is my role as a boots-on-the-ground faculty member, working one-on-one with students and with residents. And I hope to bring that practical perspective to the talk this morning-- or this afternoon. I have no disclosures.

So what I hope is that by the end of this session, you will be able to explain the importance of direct observation as an assessment strategy in competency-based medical education; that you will be able to describe the factors that impact the frequency with which we observe learners with patients, and the factors that get in the way of the quality of those observations; and then, hopefully, I'll leave you with some strategies that can improve both how often direct observation occurs and the quality of that direct observation.

So the roadmap for this talk: I'm first going to spend just a few minutes talking about what competency-based education is and the role of workplace-based assessment, which is just the fancy terminology for direct observation. We're going to talk about why direct observation is so important, strategies to increase observation frequency, strategies to increase learner engagement in direct observation, and then strategies to improve the quality of those assessments. So, starting off: what is competency-based education and workplace-based assessment?
So I am sure everybody in this room is familiar with the Institute of Medicine's "To Err is Human" and "Crossing the Quality Chasm." And these reports, I think, served as part of the tipping point for the movement to competency-based education. As you know, these reports highlighted the 98,000 deaths per year in United States hospitals, calling out the systems in which that care occurs, and the recognition that some of our systems are not necessarily training a future workforce to meet the needs of this country. And there was a real concern at that time that we are a self-regulated profession, and that we were going to lose the ability and the right to self-regulate if we could not convey to the public that we can be trusted to put out into the health care workforce what this nation needs. And I think we quickly realized we had better get our house in order if we wanted to continue to be a self-regulated profession.

Competency-based education is an outcome-based approach to the design, implementation, assessment, and evaluation of a medical education program using an organizing framework of competencies. What that means is you define what it is that we want at the end, then what curricula we need to ensure that we get to that end, and what assessment strategies are going to ensure that we get to that end.

A simpler way to think about this is the cartoon: "I taught Stripe how to whistle." "I don't hear him whistling." "I said I taught him; I didn't say he learned it." When I trained, it was very process and structure oriented. I would have gone to a whistling lecture. I would have had a whistling clerkship, a whistling sub-I, and then I would finish medical school or training and somebody would say, well, by doing all of that she could whistle. In competency-based education we say whistling is really important, and before you finish training, and before you go out and I entrust you, I'm going to make sure that you can whistle. And I'm going to have the training experiences to ensure that you can whistle, and whistle well.

So the paradigm shift in competency-based education is that we are moving-- slowly-- from what has been fixed length, variable outcome training-- you go to medical school for four years, you do residency for three years, and we assume that because of that time the outcome is the same, which we know it's not-- to what is called variable length, fixed outcome training. I define the outcome of what it is to practice well, and it may take individuals different amounts of time to get there. Competency-based education is much more about knowledge application. It is criterion-referenced, which we will come back to shortly, and there is a tremendous focus on feedback, or formative assessment.

One of the key assessment strategies in competency-based education is direct observation, or workplace-based assessment, which is essentially the assessment of day-to-day practice in the authentic clinical environment. It's watching what a doctor does in practice. So this is Miller's pyramid. And Miller's pyramid lays out the different strategies we can use to assess competence. At the base of Miller's pyramid, you are all familiar with multiple choice tests. Great for assessing what somebody knows. But what we want to figure out is what somebody is able to do. And the assessment strategy for identifying what somebody can and cannot do is observing them in actual patient care.
And you can appreciate that that is a much better strategy for assessing expertise, and in a much more authentic way. There's an entire toolbox of workplace-based assessments. One example is the end-of-rotation evaluations that get filled out. The workplace-based assessment strategy that I'm going to focus on for this talk is direct observation of clinical skills-- watching a learner in the context of patient care doing important patient care tasks. Some examples are things like watching somebody take a history, watching somebody do a physical exam, watching somebody talk to a patient about a diagnosis or a plan, watching somebody perform a procedure. And while I'm largely going to be talking about observation of a doctor-patient encounter, there are other things we observe too. How does somebody work in an interprofessional team? How does a senior resident act as a team leader to a medical student? Those are important observations as well.

So why is direct observation so important? Well, I think there are several reasons, some of which focus on the learner, some of which focus on the patient. First and foremost, if we think about what the goal of patient care is and what the purpose of training is, I would argue that the goal in patient care is that patients get high quality care-- that their care is safe, it's effective, it's patient-centered, it's of high value. We know that one of the key drivers of that quality is getting the right diagnosis. And you all know that good history taking gets you to the diagnosis 80% of the time, and that in order to avoid unnecessary testing or avoid diagnostic error, you have to be really good at history taking. We also know that faulty data gathering is one of the most common sources of diagnostic error and a common source of doubt. So this very fundamental skill is really important to patient care quality.

We also know that how you communicate with a patient is really important. And what this slide lists are many of the patient outcomes that are associated with the ability to communicate in a patient-centered way. And this is not just being a nice person. You can be a nice person and not communicate in a patient-centered way. This is a skill set.

Trainees come into training with variable skills. And so although you graduate medical school and come into internship and residency, there is wide variability in that skill set. Monica Lypson at the University of Michigan has done some interesting work where all of her incoming interns do a standardized patient assessment of core skills like history taking, physical exam, and informed consent. And everybody has different strengths and opportunities for growth. We know that variability in skills is not just related to trainees, but to practicing physicians as well. There was a study a while back done out of Jefferson, where they took third-year medical students, residents, and general internists-- and I believe there were some cardiology fellows in there as well; they did not include cardiology attendings in this study-- and looked at their ability to diagnose a murmur. And the ability to diagnose a murmur remained flat after the third year of medical school. Clarence Braddock has done some work around shared decision making-- audio recording physicians talking to patients about something that required patient input. It was very uncommon that shared decision making occurred.
So there is a lot of variability in some of these very important skills. So if the goal of training is safe and effective patient-centered care-- and we know that there's variability in the skill level of our students and our residents, as well as our attending physicians-- when you think about it, the goal of being a supervisor to a learner is to fill in the gap between what a learner is able to do and what a patient needs. And I would argue that it is very difficult to know what gap you're filling if you haven't observed the learner. So if you don't observe, one of two things is going to happen. You're going to give somebody autonomy that they are not ready for, which could put the patient at risk. Or you over-supervise somebody who actually could use some autonomy, and you stunt their development. So good supervision requires a good knowledge of what a learner is able to do.

The second reason why observation is so important is that it is a fundamental step in the learning cycle, which is shown here. You set goals, you observe, and then you give feedback-- and quality feedback is based on the observations that you're making.

So you can take a look at these individuals and think to yourself, what is it that they have in common? I was once doing this in a workshop, and the first thing that came out was that they make a lot of money, which is not what I was going for. But I'm sure many of you thought, these are individuals who are experts, right? And you might have thought: they practiced a lot, and they have coaches. So if we think about it, our goal in training is really to train experts. And in competency-based education, competence-- that's just the floor, right? We are trying to train experts. In order to gain expertise, you have to practice deliberately. And these are the steps of deliberate practice. And part of what deliberate practice entails is getting informative feedback. Those of you who came to the feedback workshops heard me talk a little bit about this earlier today.

This is a slide that demonstrates how we learn things. When we first are learning something, we're in what's called the cognitive phase of learning. It's where you really have to think about everything that you're doing while you're doing it. Remember when you first learned to drive a car and you had to think about, where are my hands on the steering wheel? How much pressure do I apply to the accelerator and the brake? And if you were like me, I couldn't talk to somebody while I was driving. I couldn't listen to the radio, because I had to concentrate on driving. That's the cognitive phase of learning. Then, with 20, 30, 40 hours under your belt, you get to the associative phase of learning. You don't have to think about it quite so much. And now I can drive into work from the suburbs on the Schuylkill Expressway and I don't think about driving, unless there's really bad traffic or bad weather. I drive, I'm thinking about a lot of other things, I show up at work, and I don't remember actually driving. It's autonomous. What you can see in this slide is that once something becomes autonomous, you stop improving. What experts are able to do is remain in those cognitive and associative phases of learning. They do so by always setting a goal a little bit beyond where they're currently performing, so that they are always practicing. When something becomes autonomous, performance not only plateaus, it can actually decline.
In order to practice deliberately, you have to know what the goal is, and that goal has to be fairly well-articulated, with a clear mental representation of what the task looks like. So if I'm going to improve my shared decision making, there needs to be some agreement about what that looks like. And we're going to come back to this in a little bit.

There's a wonderful article I talked about this morning from Atul Gawande, back in The New Yorker in 2011. And I think he does a great job highlighting the value of direct observation, not only for a trainee, but for a practicing physician. He reflected on the plateau in his operative morbidity rate, thought about the fact that athletes continue to have coaches even when they're experts, wondered what it would be like to have a coach come into the OR with him, contacted a surgical mentor, had this mentor scrub in, and got feedback after the case. And he describes in the article that this "20-minute discussion gave me more to think about and work on than I had in the past five years." We have blind spots, and it's hard to know what to work on without somebody else helping to calibrate your self-assessment. So I'm largely talking today about direct observation of learners, but I would argue there would be tremendous benefit in direct observation of practicing physicians as well, so that we all can continue to improve.

So in order to be a good coach, you've got to watch. I personally think Michelle Kwan was one of the greatest skaters ever. My husband thought Tara Lipinski was much better because she won the gold and Michelle didn't. But think about a skater like Michelle Kwan, and imagine her practice paradigm is that she shows up on the ice while her coach stays in the coaching room. She goes on the ice and she practices for 20 minutes. She walks down to the coaching room. She says to her coach, this is what my triple axels looked like; I think I might have landed on an inside edge. And the coach asks some clarifying questions and gives her some suggestions. She goes back to the ice and she practices again. Would she have achieved what she achieved in skating? And we know that the answer is no. And yet, in medicine, and particularly in the more cognitive domains like internal medicine, that is often our training paradigm. We send learners into rooms. They collect information. They come back to the teacher and they present. We ask some questions, and maybe we go back and we clarify. But just because somebody can present a case doesn't mean we know what happened in the room-- the way they asked the questions, the order they asked them, how they asked them, how they formed a relationship with the patient. We base a lot of assessment on what is called proxy information. And we'll come back to that.

So the last piece of all this-- I must be hitting my thing while I'm talking, let me go backwards here. All right, so coaching requires observation, and then obviously feedback after observation. The idea in competency-based education is that you make multiple observations in multiple settings over time. Each observation is like a piece of the puzzle. You have enough pieces of the puzzle, you form a picture. The puzzle is not complete-- I don't have all the puzzle pieces-- but you know what that picture is. And at the end of the day, you have enough puzzle pieces to say, you know what? I entrust this person. I believe that they can go out and provide safe, effective, patient-centered care.
Because at the end of the day, what sits at the top of Miller's pyramid, and what is directly affected by what we do, is the patient. So in summary, three reasons why direct observation is so important. One, I think it's fundamental to ensuring that patients get the care that they need, and that folks are appropriately supervised but given autonomy when they're ready for it. Two, direct observation and feedback is a critical component of mastery learning, and I think in education we are trying to train masters and experts. And three, to make a defensible entrustment decision, you've got to observe.

So, strategies to increase observation frequency. I run a lot of workshops on direct observation that are much more interactive than this, and I ask folks, what are the barriers to direct observation? And what would you imagine the top barrier is to doing more observation? Time. Time comes up. No matter what discipline I do it in, it is always time. There are other barriers too, but time is the one that always comes up first. And that is real. I know what it's like being in clinic. I want to go in and observe, and then there are going to be four residents waiting, and the whole morning's backed up by 9 o'clock. So I get that. There are other barriers for faculty doing observation. We don't want to get in the way of the relationship between a learner and a patient, particularly as learners move further along in training to residency and to fellowship. A lot of folks feel uncomfortable doing direct observation-- with what they're being asked to assess, with knowing what the standards are, with knowing how to give feedback after you've done an observation, and then, if you've identified something, with figuring out how to help make it better. I'm going to focus mostly on time in the next few slides, to give you some ideas about how to get over this barrier of time getting in the way of doing more direct observation.

I think when you say to people, we need to do direct observation, the first thing individuals think you're asking them to do is watch the whole thing: a whole history, a whole physical exam, the whole conversation with the patient. We do not have time for that. What I would like you to think about is the fact that you don't have to watch the whole thing. There is so much you can learn watching for five minutes. And so I call these observations snapshots. The key is you have to watch things that are meaningful and important. Meaningful and important to the learner: What are they working on? What are they not sure about? What do they want to get better at? What are the skills that you, as the supervisor, know are critical to the profession, that you want to make sure they're able to do? And then, what are the skills where it's really important for the patient that we get this right-- where maybe having somebody with some more expertise in the room wouldn't be such a bad thing?

So think about how we can structure observation to achieve some of these goals while focusing on authentic clinical work. Inauthentic clinical work is something like: you already went and took the history, and now I'm going to go watch you take the history again. That's just weird and awkward. So how do you embed it into real work? Because we don't have a lot of time, how can you structure these observations so they don't always take extra time-- a 2-for-1, where the observation also gets clinical work done?
And a 3-for-1: how does it also help the patient? So I will share with you some examples of my 2-for-1s and my 3-for-1s. When I used to do inpatient medicine, my 2-for-1 was that I'd come in early, and with the students or the interns, we'd pick one patient and I would pre-round with them. You see history, you see physical exam, you see talking to a patient, and I'm not going to have to see that patient later in the day. And because I was right there, we can actually make a plan, which saves time later on rounds. It doesn't take extra time.

If I'm precepting in clinic, one of my two favorite observations is agenda setting-- watching the first three minutes of a patient encounter. Most of our residents don't agenda set, and it's very hard to be efficient in an outpatient practice without agenda setting. So it helps later on, particularly for the resident. Another time-saver: a patient comes in with shoulder pain. The residents know: take a history, don't do the physical exam, come get me. We'll go back together, and I will just watch you do the shoulder exam. Because honestly, by the time the resident comes and describes the shoulder exam to me-- and I was awful at the shoulder exam in residency-- I don't know what's going on with the patient. If they don't know what's going on with the patient, I have to go back in anyway, and the patient now has to have their painful shoulder examined twice. So you just do it all at once. It saves time. And the more you do this, the more you find these little ways to just embed it.

Here are some other examples of snapshots. Part of a physical exam, watching somebody start a medication with a patient, explain a diagnosis, talk to somebody about quitting smoking, code status, informed consent-- all, I think, terrific snapshots. And then, at a programmatic level, as you think about what the program is trying to achieve, there are milestones. So what would we need to observe to say, yes, learners are hitting these milestones? I think there are opportunities for program directors, and clerkship directors, and fellowship directors to say, look, here's what we need to assess. What are the kinds of observations we would need to make to know that a learner is able to do these important tasks?

The other strategy is the divide and conquer strategy. Maybe not everybody has to do everything. Maybe in the general medicine clinic we're really focused on counseling around starting a medication. Maybe in the NICU it's about code discussions and informed consent. And maybe on a general medicine service it's more about history taking and physical exam. Part of the way to get buy-in for this is to identify what different rotations find important, because they will be much more likely to do observation if they feel that they're observing important things.

The other piece of saving time is to recognize that you only really have to be in the room long enough to come up with three things to talk about. Once you have three things, leave. A learner can't really process more than a few pieces of feedback anyway; it's overwhelming. So I stay in the room long enough to come up with a few things, and then, if I'm busy, I leave. And obviously, you have to set that up ahead of time, otherwise people think you're rude, just walking out in the middle.

All right, so strategies to increase learner engagement.
I think one of the biggest aha moments I had in the past year or two is this: I was running a lot of workshops on feedback and direct observation for faculty, and the literature increasingly was describing how learners feel about all of this. And I realized we never run feedback workshops for residents or students about how to seek out feedback, why it's important, and how to get learner buy-in to direct observation. So I think part of the important conversation around improving observation is having conversations with the individuals who are actually being observed.

So, learners don't like being observed. And I remember I was observed twice in medical school. I somehow made it through all of introduction to clinical medicine with a physician researcher who practiced out of a space in the middle of campus. So they gave me the name of the patient to see. I went and saw the patient. And then I walked to the Leonard Davis Institute and met with my preceptor, and I talked about my history and physical exam-- having no idea what I was doing, by the way. And my first patient ever had a halo. And I was just so worried I was going to hit this person's spine moving them. The other time I was observed was on my medicine clerkship, by an attending. I was so nervous, because nobody had ever watched me before. And it was a patient with heart failure. And I figured out it was heart failure, but I didn't even know there were all these questions you were supposed to ask about why the patient had a-- it was awful. It was totally awful. And I think because observation happened so infrequently, and I wasn't sure what the purpose of it was, and I felt like I was being graded and it wasn't for feedback, it really was uncomfortable.

So on this slide are a lot of reasons why, from a learner perspective, direct observation is challenging and difficult. One, they recognize that we're busy, so they're not going to necessarily ask to be observed. Increasingly, there's a lack of continuity in training between supervisors and learners. When I was a medical student, I had an attending for four weeks, and then I went to dinner at that attending's house at the end of the rotation. We have some rotations now that are a week long. And we know from the literature that having that relationship-- being observed and getting feedback-- works way better when you actually know the person that you're working with. Think about when you were an athlete or a musician: you had your coach for years. And that can really make a difference. It's anxiety provoking. It ends up being, oh, this person needs four direct observations, so we watch things that are irrelevant, and it becomes a check-box activity. Two very important values in medicine are autonomy and efficiency, and if direct observation is not done well, learners can perceive that it gets in the way of their autonomy and their efficiency. And then finally, there's this question of: when you're watching me, what is the purpose? Is this for feedback or is this for assessment? And even if it's for feedback, if it happens only once or twice, it's assessment. It is high stakes.

So part of improving observation is changing the culture around observation and the purpose of training. There's something called self-regulated learning. This is the idea that we have to empower learners to seek out observation. And what that entails is talking to learners about why observation is so important.
That this is essential for mastery. You cannot become the physician you want to become if you are not getting good observation and feedback. Learners have to feel empowered to know what they're working on, and what they want to be observed doing so they can get that feedback. That's what this self-regulated learning idea is. And I think we have to do a much better job of saying, you know what? In my experience, in the first six months of internship, this is what many interns need to work on-- and normalizing the fact that there are skills that everybody needs to work on.

And then finally, we need to encourage authentic behavior during observation. This is the idea that when I am observing residents in clinic, and we're walking down the hall to go into a room, I say, you know, when we go in the room, I really want you to try to do what you normally do. I know you type on Epic. Don't suddenly pull your chair up to the patient-- I know you don't do that. And what you need to know is what you authentically do. Is it good? [INAUDIBLE] chair [INAUDIBLE] If you don't know when to take your hands off the keyboard and pull the chair up-- and when you can pull the chair up and then do your charting-- I want to know how you do that. You have to try to get people to practice authentically so they can get feedback on what they're actually doing.

The other piece of this, I think, is to recognize that what we are trying to shift to is a growth mindset, as opposed to a fixed mindset. The growth mindset is: I want to know what your areas of growth are, and I want you to recognize that that's an opportunity-- that just because you can't do it right off the bat doesn't mean you're never going to be able to do it.

So as you think about what your observation snapshots are, an important piece is to encourage learners to identify what they want to be observed doing. Ask questions like: Who's the most challenging patient on the service right now? Who's the most challenging patient on your schedule? What are you working on? What do you want me to observe, or what do you want feedback about? So as we're walking down the hall, and I'm trying to encourage somebody to practice authentically when I'm in the room, I also say, while I'm in there, what do you want me to focus on? They look at me like, what are you talking about? And I'm like, no, really. If I'm going to be in the room, what do you want me to pay attention to? That way I can try to provide feedback about things that they are interested in. I also provide feedback about things they are not asking me to observe.

The other piece of this is to think about how you preserve the relationship between the learner and the patient. There is a technique called triangulation. Where you put yourself in the room as the observer matters. If I am the attending and I'm watching a resident interview a patient, I sit in a place that is in the peripheral view of the patient. I still can see the patient's face, which is important for picking up nonverbal cues, and I can see the learner as well. Despite that, patients oftentimes will look to the individual in the room who they perceive to have the most authority. So one, you've got to make yourself a fly on the wall, which means you can't be checking your phone, you can't be tapping your pen, fidgeting all around. And most importantly, you cannot speak and interrupt, which is going to be more or less difficult for some of you.
So you've got to be a fly on the wall, unless something egregious is happening. And, without being a sociopath, you need to avoid eye contact with the patient, because the more you engage the patient-- because of that hierarchy-- the more they're going to start looking at you. You have to look a little bit, but you minimize it. Even if you do all of those things, inevitably the patient is still going to look to you. And all you need to do in those situations is, as the patient is looking at you, you just look back at the learner, and then the patient will look back at the learner. And if the patient starts engaging with you and talking to you, you say, you know, that's a great question. Dr. Warburton, what do you think about that? And it takes it back to the learner again. So those are some strategies for maintaining learner autonomy in the room.

And then the last piece of maximizing learner buy-in is the idea of observing longitudinally. As I mentioned earlier, I think the best observation and feedback-- and feedback actually being received-- really happens in the context of longitudinal relationships. Learners are much more willing to be observed by somebody they know. The beauty of it is you can then see individuals incorporate feedback and watch them grow. It supports their autonomy, because as you see them grow you can step back a little bit, and there's definitely more credibility and trust in the feedback that you are giving.

So the last piece I wanted to talk about is strategies to improve assessment quality. We've talked largely to this point about how you get in the room, do these observation snapshots, create a culture with a growth mindset, and get learners to buy in and help contribute to what they would like to be observed doing. But the last important piece is: how do you improve the quality of the observations? Although workplace-based assessment is a key assessment strategy in competency-based education, we know that the quality of those assessments is poor. If I were to show all of you a video of a resident with a patient, and I asked you to score it, you would be all over the map. We don't have time to do it, but just trust me, you'd be all over the map.

Part of the problem, and part of the low inter-rater reliability, is that, number one, oftentimes we're not accurate in our observations. We miss things that learners have done well, and we may not pick up on important errors. There was an interesting study done back in the '80s where they had faculty watch videos of residents with patients. Faculty were able to identify only 30% of the errors that were scripted into the case. And even when they were given a checklist of important behaviors, they could only identify 60% of the relevant things that should have been done in that encounter. So that's poor accuracy. Variability is the idea that each of us looks at a different thing when we're observing. I might be focusing on communication skills; Dr. Warburton may be focusing on whether the history is hypothesis-driven. And we may have very different ideas of what's considered acceptable. If you give folks a numerical rating scale, there are also rating errors. There's the halo effect: I really like you, ergo, you are smart, you have a great fund of knowledge, and you are a great communicator. That's the halo effect-- you generalize a positive attribute to lots of other competencies. The opposite of that is the horn effect.
This is the idea that you do something I don't like, and then I penalize you across the board. Leniency: folks who are easy graders. Stringency: folks who are hard graders. And then there's cognitive bias. This is the idea that if I'm working with somebody who's doing a good job-- not great, not bad, good-- and the learner I just worked with was a superstar, my "good, not great" doesn't look so good anymore, right? Or the flip side: that very same learner, if the person beforehand was really struggling, starts looking really good. That's an example of a recency bias. There are lots of other cognitive biases, related to things like gender, height-- many. And all of those influence the quality of the assessments.

In some of the research I have done over the past few years, I have been really interested in why there is so much variability in assessment. One of the things we've learned is that we oftentimes hand people numerical rating scales that look like this, and we give them anchors that look like this, and we never say what "satisfactory" is. And so people interpret these scales maybe like this: well, satisfactory is you're doing what I think you should be doing for somebody at your level of training. The problem is everybody has a different idea of what, say, a second-year resident should be able to do. That's normative. We did a qualitative study, and people said, I have no idea why it's satisfactory. I just know it in my gut. I can just tell. A gut instinct isn't necessarily bad, and it may not be wrong, but if you can't break down for a learner why you think it's satisfactory, it's going to be difficult to give effective feedback.

This is probably the most common frame of reference that evaluators use: satisfactory is, is this what I would do myself? And this raises a big elephant in the room. What's the elephant in the room, if the most common frame of reference we use is ourselves? Right-- is the person who's doing the assessment the gold standard? I remember when I first started out, and even now, there are skills that sometimes I'm asked to observe where I know deep down inside I am not the gold standard for this skill. And I shared with you some data earlier, but there is a lot of data that the assessors themselves oftentimes have really variable clinical skills.

You may be wondering, well, does that influence how they assess? I was interested in that, and so we did a study where we brought a bunch of residency educators in. We put them through a standardized patient exam, and then we had them watch videos of residents with patients. And we looked to see if there was a relationship between their own clinical skills and how they assessed residents. And we made sure that the checklists were relevant checklists-- not, someone's coming in with abdominal pain and you have to check extraocular movements. The checklist items were important for what you would need to say or do. And what we found is that those individuals who had higher history taking and physical exam scores, and those faculty whom the standardized patients felt had better communication skills, were more stringent in how they assessed learners. So there is some relationship between your own clinical skills and how you assess. And then, interestingly, it was very uncommon for assessors to use what is called a best practice or criterion-referenced approach, where they say, you know what?
I just watched shared decision making; these are the eight steps of shared decision making, and this individual did four of the eight. That would be criterion-referenced, or best practice. And that was a very infrequent standard for assessors to use.

So the way we've handled this in medical education and assessment is that if assessment's not going well, we try to make better assessment forms. And the reality is, the form is not the magic bullet to improve assessment quality, and it is certainly not the magic bullet to improve the feedback that learners get. In order to improve assessment, what it really requires is assessor training. And the goals of assessor training are that the folks doing these assessments have a similar basis for the assessment: that there is some agreement about what you're looking for, that there is some agreement about what is considered competent, and that we move to a criterion-referenced assessment, where the criteria are grounded in what we know equates to safe, effective, patient-centered care.

One of the rater training techniques is called performance dimension training. And you've probably figured out by now that I really like cartoons. Performance dimension training is a rater training technique that answers the question: what am I supposed to be looking at when I'm doing observation? The way it works is you have a group of individuals come together and you pick a skill. Let's say that skill is starting a new medication for a patient. And you say, I want you as a group to generate a list of what needs to happen when you're starting a medication. What do you need to talk about with a patient, and how are you going to talk about it? And you discuss the criteria or the qualifications-- what is it that you need to do to start a medication? There are actually lists out there already of what is important to talk about, and how you're supposed to talk about it, when you start a medication, so you can then compare the list the group generates with what you know is already out there. And the purpose of this is not to create a checklist, but to have a shared understanding of what it is to start a medication with a patient.

What we know from research is that when individuals do performance dimension training, or rater training, and then go to observe, what they say is: you know what, I was able to go in and observe. I knew what I was supposed to be looking for. I felt like I had a standardized approach. I actually felt like I was paying attention to more things than I normally do. Maybe before I was only focusing on whether they told the patient the name of the medicine, but now I realize I pay attention to these other important things as well. And what we heard from many individuals is that they actually paid much more attention to communication skills, as opposed to just the content of what was being asked or discussed.

Individuals who did performance dimension training also said, you know what, it not only helped with observation, it helped with my ability to give feedback. Number one, I could talk about more things when I was giving feedback. Two, I actually had the words to describe the behaviors. I was able to say: I noticed the way you started off the encounter, that you didn't ask open-ended questions, that you didn't do a teach-back at the end. So, having some language around the skills. Individuals also said that it was much easier to give corrective feedback.
And part of it, I think, was that they could describe things behaviorally: I saw that you did this. I noticed you didn't do that. And part of what also facilitated the corrective feedback is that, because the group had developed a shared mental model, it wasn't, when you work with Jen Kogan, make sure you do this. It was, everybody agreed that these were important skills. So they felt like they could give this corrective feedback. Feedback was more specific, and individuals felt like they could deconstruct, or break down, that gut feeling or holistic assessment.

And then-- we weren't anticipating this-- from the individuals who did this performance dimension training-- and we did it on skills like breaking bad news and motivational interviewing-- we heard a lot of faculty say, you know, I haven't thought about these skills in a long time. And some folks said-- and I was among them-- nobody ever told me how to break bad news. I just figured it out along the way. So this rater training technique not only can help in your role as an assessor and refresh your knowledge; we heard from individuals that they actually gained new knowledge that they themselves were able to use in patient care, one-on-one with their patients.

Research has also shown that performance dimension training enables the assessor to more accurately circle the right numerical rating on the form. Right now I am wrapping up a randomized controlled trial to see if rater training improves the quality of the narrative assessment-- the actual observations. Because, at the end of the day, yes, it's nice that somebody knows that on a nine-point scale this encounter was scripted at a six, but what I really want to know is: did you pick up the relevant behaviors, and can you describe them to a learner? I'm actually shocked that nobody has done this study-- if you train somebody, can they actually do better workplace-based assessments?

This takes time. And so we did a study where we said, well, maybe we can just bypass the whole group work thing and just hand people some frameworks. And we handed a bunch of internists frameworks around things like starting a medication and what you would need to do. And-- you're all internists-- people were like, where did this list come from? What's the evidence that doing all of these things will make a difference? The reality is, the group that got to come up with the list themselves came up with everything that was on the list. So in order to get buy-in, it's really helpful for the group to generate the skills first, and then you can give them the framework to compare.

The last part of rater training, to really help improve the quality of those assessments, is what's called frame of reference training. So you've worked in a group, and you've said, history taking: this is what outstanding history taking looks like. The next part of rater training is that you help people make a judgment about how the history taking was, using a compare and contrast process. You show individuals encounters of the same exact scenario, but scripted at three different levels of performance. So maybe it's breaking bad news, and you see one version where it's done expertly, one where it's pretty good but with some errors, and then one where it's really going south.
And so you watch the different encounters, and you take the framework that you developed and apply it to the scenario. And then the group has to say: if at the end of the day we have to make a judgment about this learner, or we have to circle a number on a numerical rating form, what's the standard? And we're really getting individuals in competency-based medical education to shift to: what does somebody need to do to provide safe, effective, patient-centered care? What are those criteria?

Here are some examples of what those scales might look like. And I understand many of the programs here are using these types of entrustment scales. The standard is either what's called a co-activity entrustment scale-- how much supervision, how much did I have to do, while I was working with this learner?-- or a supervisory entrustment scale, which ranges from, if I had to watch this individual the next time, I would not let them do this EPA-- EPA is Entrustable Professional Activity-- they could just watch me, all the way to, I think this individual could actually supervise others. What the group has to decide is what somebody would need to do for you to be able to say, I don't need to be there the next time-- to be able to entrust them. So you take individuals who now have this framework, and as a group you ask: which of these skills are essential to ensure that a patient gets safe, effective care? Because that's what's going to help you ultimately make an entrustment decision.

So, just to wrap up and leave some time for questions. As we think about how to improve the frequency and quality of direct observation, the first part is you've just got to get in the room. And I think the key strategy there is realizing that five minutes of observation, in different situations and different contexts, really can give you a good picture of a learner over time. You would never want to make any type of assessment based on a single snapshot, but lots of snapshots give you a picture of somebody's strengths and areas for improvement. The second part is that this is not just about faculty development; it's about how you create a program culture around direct observation. What are the steps we have to take to engage learners in the process? How do we improve our learners' ability to become self-regulated learners and have a growth mindset, particularly as we, frankly, operate in a very performance-oriented culture? How do you put yourself in the room to maintain learner autonomy and the relationship with the patient? So we talked about triangulation. And we talked about, whenever possible, thinking about how we can construct training to have some of these longitudinal relationships that can better foster observation and feedback. And then finally, three, to recognize that we need to stop trying to fix assessment by creating better forms. If we really believe that this is important-- and I obviously think it is-- we have to put our time and effort into assessor training, so that assessors have a shared mental model of what the important skills are and what the components of those skills are, and to help move us to a criterion-referenced approach to assessment that aligns with what patients need.

So with that, I think there's time. I'm happy to take some questions. [APPLAUSE]

[INAUDIBLE] microphone [INAUDIBLE] Mark [INAUDIBLE] I'm pretty-- ooh. [LAUGHTER]

All right. Thanks so much for the talk. That was very interesting.
I especially liked the shared mental models at the end, and how those can be used as a joint-- or a different kind of-- way of assessing behaviors. One question I have is about the actual implementation, the need for [INAUDIBLE] and the reason why I'm asking it [AUDIO OUT] has a dominating culture [INAUDIBLE] [AUDIO OUT] I also like the [INAUDIBLE] and I think it's important for [INAUDIBLE] had a similar [INAUDIBLE]

A really thoughtful and insightful question. And what I will tell you is that I don't know. I shared that slide about cognitive bias. What's interesting in some of the research-- not necessarily in the medical setting-- is what happens when you try to get individuals to understand their privilege, to understand the biases they may bring, and, importantly, unconscious bias, which I think is important for anybody who is working in teams, whether it's assessment related or not. I think we all would benefit from recognizing what our unconscious biases are and how those influence assessment. There are some individuals who say the more you point these things out, the worse it actually makes assessment and the ratings that individuals give. To my knowledge, there are no studies that have looked at the relationship between dominant culture and assessment in this context. And I don't know if this is quite answering your question, but when I think about these shared mental models-- who's creating the shared mental model-- I think there are opportunities to include learners' voices, or a broader perspective, in the process. Whether that's a patient or a learner, I think it can help shape what that shared model is, so it's not just a single person's mental model.

The other piece of it-- which I don't know if it's quite what you're saying, but I'm thinking about it too-- is that part of the problem in education is that we conflate feedback with assessment, and coaches with judges. And I think it makes it really complicated for a learner when you're on rotations and you're getting feedback, but you're also being evaluated, all at the same time. So I think part of the approach is thinking about who the people providing the feedback are, and who the individuals are who have been trained to take that assessment data and think about it thoughtfully. I sit on the Resident Competency Committee, and I'm not an APD, and I think part of my role there-- we didn't talk about inference today, but it's sort of in the cognitive bias realm-- and you were there-- is that people would start talking about residents, and I'd say, I think we're making inferences here. There was a behavior, we're making an assumption about that behavior, and that behavior may be about something totally different. So I think part of it is also having individuals who can think about broad explanations for things. Yes?

[INAUDIBLE] Oh, do you want? Sorry. [AUDIO OUT]

Hi. I think you hit on some very important points in trying to train someone, especially with the patient. And I'm wondering whether our culture, in general, will change if we decide how to examine the physician at the end of the training. When I was brought up in the British system, they would not only give you an exam to test your knowledge, but there would be three people actually watching your clinical exam, and then two other people giving you an oral exam.
So they would look at all three. The way we do it here is just to test your knowledge at the time of the exam. And we assume that during the three years that they've been a resident, they have become competent clinically and all that. And that might be a way of enforcing it-- because I really liked exactly what you are proposing.

Yeah, I think at the end of the day, we have to know that our learners can do what we believe they can do, and that they can do what they should be able to do. And I also think it's horrendous-- shame on us-- to send learners out to practice unsupervised when we have not helped them make sure that they have the skills that they need. I guess I feel, though, that what we need to do is create environments within programs-- that a program can figure out how to do that without necessarily needing external regulation. So that when our program director has to sign off on the ABIM form-- I am attesting that this individual can sit for the boards-- they are basing that on good data. And I would hope that we can do that in a way that doesn't require another high-stakes exam that folks have to go and take to prove that they can do these things. That we can empower programs and give them the tools and the resources and the time that they need to make these assessments.

[INAUDIBLE] the issue of direct observation and the observer, [INAUDIBLE] principle or issue about changing certain [INAUDIBLE]. I know [INAUDIBLE] just to yourself, but especially in the areas of [INAUDIBLE] And you had indicated that we are doing this in a balanced way [INAUDIBLE] different experience [INAUDIBLE]

My understanding is that there's fairly mixed literature on the degree of the effect-- the Hawthorne effect, the idea that when I observe you, you're going to be on your best behavior, or just that another person is in the room. I mean, any of you who have been observed-- you start getting in your own head, and it changes what you say and do. In one of our research studies, we brought faculty in and interviewed them about what it's like to do a standardized patient exam. And probably the key takeaway was the empathy they developed for learners. They said, I said things I would never say to a patient, just because they knew they were being observed and couldn't just be themselves. I think there's mixed data in the literature about how strong that effect is. People can only perform so far beyond their actual skill. Being observed probably does change the situation. I think what is known is that the more observation happens, the more it attenuates the effect. The effect is much greater if I watch you once a year than if I'm in the room with you multiple times per week. Eventually you probably get to your more authentic self, and you can block my presence out a little bit better.

Thank you so much. Thank you. [APPLAUSE]