The first time I was involved in giving 360-degree feedback was early in my career in Human Resources.
Since it was part of our company's talent development strategy, I had heard about it but had not yet been invited to submit feedback for someone I knew.
I felt honored that my peer had selected me from among the many perspectives he could have chosen and that the feedback would benefit this leader long-term as part of their career development.
I also felt somewhat relieved that my feedback would not be tied to our annual performance review process.
While I had given this peer a lot of direct feedback throughout our working relationship, it was interesting to see how my feedback compared with that of the other folks submitting responses.
My name is Tony Tijerino, and I am an HR professional, consultant, and believer that feedback of any kind is truly a gift. As cliché as that sounds, even critical feedback can be transformative when delivered sincerely from a place of kindness.
My aim in writing this article is to explore 360-degree feedback and offer a seven-step guide for any manager or HR practitioner interested in using it.
After reading this article, you will:
Understand what 360-degree feedback is and how it differs from other forms of feedback
Learn how to give and use 360-degree feedback following a step-by-step process
Identify considerations for implementing a 360-degree feedback program in your workplace
Internalize this process by seeing it in action through an example of a hypothetical 360-degree feedback session.
Once you understand the ins and outs of 360-degree feedback, you will be able to use it to enhance your organization’s individual, team, and organizational performance.
What is 360-Degree feedback?
360-degree feedback, also known as multisource feedback, multi-rater feedback, or a 360 review, refers to feedback about employee performance or observed behaviors gathered from various sources. These sources include:
The recipient themselves (by self-rating)
The recipient's manager or supervisor
Peers and colleagues
Customers (internal and external)
Direct reports or subordinates (if applicable).
360-degree feedback is most often used for the leadership development of managers and supervisors, but it can also be used for non-managerial employees. Feedback is usually collected via surveys, questionnaires, and interviews.
It is also typical for managers themselves to evaluate their own performance as part of the 360-degree feedback process.
This multi-source method of gathering feedback is the primary differentiator from other feedback processes. Traditionally, supervisors are the source of feedback as part of annual performance appraisals (or performance reviews) and more frequent feedback discussions.
The information gleaned from a 360-degree feedback process is more holistic and can paint a more complete picture of the recipient’s performance and development.
Having multiple input sources also allows for a broader range of competencies to be measured based on who is being asked to offer feedback.
The other significant distinction from other forms of feedback revolves around anonymity.
Recipients of 360-degree feedback are typically given a summary of their feedback rather than specific information about what each individual rater said about them.
Why is 360-degree feedback useful?
360-degree feedback helps evaluate job performance and is beneficial for promoting individual development and advancing organizational development.
It’s helpful for individuals to identify areas of improvement concerning their leadership and relationships with their teams, peers, clients, supervisors, and other stakeholders.
In this context, the feedback recipient is responsible for using the information to guide their development and performance.
Though organizations most commonly use multi-source feedback for manager (or high-performing talent) development, there is a trend indicating that more companies are beginning to use 360-degree feedback as part of manager’s formal performance review processes.
In these organizations, there may be more at stake since the multi-rater feedback can affect an employee’s compensation and overall standing within the company.
Organization-wide programs and initiatives can also be evaluated through 360-degree feedback. Indicators of organizational effectiveness, such as the success of change initiatives or managerial quality, can be tracked over time by collecting multi-source feedback surveys and averaging manager ratings within and across departments.
This kind of aggregate data can be used to assess organizational performance as a whole, and multisource feedback can be used in group contexts to facilitate team- or department-wide discussions on topics that affect overall organizational performance, such as managerial quality over time.
When aggregate multi-source data demonstrates improvements to organizational performance over time, this insight validates the overall usefulness of 360-degree feedback, which has the additional benefit of embedding feedback as a positive tool in the organization's culture.
A feedback-rich company culture, in turn, contributes to higher levels of psychological safety (when feedback is done well) and improvements in overall organizational performance and communication.
When performance and employee engagement are high in an organization, employee retention is too.
360-degree feedback: pros and cons
In thinking about 360-degree feedback, it is essential to recognize the merits and disadvantages of using it for employee development only, versus using it for both employee development and performance evaluation.
The advantages of using 360-degree feedback as part of employee development plans and career growth are straightforward and include:
Anonymity allows raters to deliver feedback freely
Developmental feedback that does not affect employment decisions (e.g., promotion, pay) leads to less lenient, more honest ratings
Confidentiality promotes psychological safety and allows recipients to feel less defensive (and open to feedback)
Enables recipients to compare their self-ratings against the perceptions of others, which supports self-exploration.
Though the efficacy of 360-degree feedback depends on the organization in question, it is hard to find disadvantages to using it for development only.
The following summarizes the critical pros and cons of using 360-degree feedback for administrative uses, such as performance appraisals (and thus decisions like promotions and raises):
Pros:
Some important employment decisions (e.g., merit, discipline, or termination) benefit from multiple sources of feedback, such as when managers or supervisors do not have frequent contact with the team members they must rate.
The availability of good information supports employment decisions that benefit the organization. When raters are aware of this, they may be more exact in their evaluations.
A standardized, anonymous 360-degree feedback process makes it less likely that raters give biased ratings than if they were approached directly by upper management about their supervisor.
Cons:
When managers and executives must make administrative decisions using multi-source data, they may feel that this disempowers them and their ability to make decisions for their teams.
In cases where managers must discuss feedback with their own managers, they may find it uncomfortable to speak with their supervisors about how others see their performance. In organizations with no feedback culture, decisions based on this data may prompt defensiveness.
Employees and managers may question the credibility of the process if raters are vested in the results of the feedback (e.g., when a rater's own appraisal is affected by their manager's), creating a risk of partial raters who will not give objective feedback.
In sum, there are many clear advantages to using 360-degree feedback for employee development only and keeping the performance appraisal process completely separate.
When initiating a new feedback process tied to performance appraisal, it is reasonable to expect hesitation and discomfort.
Based on the pros and cons listed above, I would recommend leveraging 360-degree feedback as a developmental tool and assessing its reception before making decisions to ingrain it in the performance appraisal process.
Now that we understand why 360-degree feedback is more likely to be useful for employee development, let us turn our attention to a seven-step guide on how to collect, prepare, and deliver multisource feedback to support talent development.
The process: conducting 360-degree feedback in your organization
There are two general types of feedback: Recognition (Reinforcing) and Constructive Criticism (Redirecting).
Neither is inherently negative or positive, even though one often feels better than the other.
Constructive feedback is an opportunity to learn, and recognition is reinforcement that what you know and do positively affects those around you and the organization.
The effectiveness of a 360-degree feedback program depends largely on your organization's broader performance management system.
Assuming your organization's performance management process aligns with its organizational goals, and that your organization has the capacity to conduct 360-degree feedback, let's look step by step at how one could prepare and deliver it.
Step 1: Identify and communicate the purpose behind the 360-degree feedback process
As we have seen, organizations can use multisource feedback for developmental purposes, performance appraisal, or both.
It would be wise to allow folks to engage with 360-degree feedback for the first time in a “safe” setting focused on their development rather than being tied to compensation and other employment decisions.
Doing so allows organization members to understand better what this process will look like and enables them to become more comfortable with the process. The decision to make it part of the performance appraisal process can be piloted and incorporated over time.
This strategy, however, will only be effective if the 360-degree feedback process is well communicated. Management and HR must educate all team members on how the 360-degree feedback process facilitates professional development and on whether it is tied to appraisal and compensation.
All organization members should also understand the benefits of feedback more generally, understand the benefits of using a multisource feedback process for development, know what the process looks like in practice, and understand how the results from the process will be used.
A communication plan should include other important features of the 360-degree feedback process like confidentiality, anonymity, process timeline, and the kinds of things included in the feedback survey.
Step 2: Identify who will provide input
The second step in preparing to conduct 360-degree feedback in your organization or on your team is to identify the appropriate input sources.
A helpful question to ask yourself when identifying suitable sources is, "Whom should I solicit feedback from for this recipient, and what value will that feedback offer them?"
Suppose leadership will use the 360-degree feedback process strictly for developmental purposes. In that case, listing the teams and individual people the recipient interacts with will support getting a holistic view of the recipient’s interactions with these stakeholders.
A helpful tip is to establish a formal process for selecting feedback raters, which helps prevent biases from creeping in. The obvious exception is the recipient's self-review, though the recipient should be encouraged to self-rate as objectively as possible.
If the recipient, for example, is a Customer Experience Manager with aspirations to continue growing within the organization, valuable sources of feedback would include:
Members of this manager’s team
The Customer Experience Manager (self-rating)
Customer feedback (especially in cases where this manager directly interacts with specific customers)
The Customer Experience Manager’s manager
Peer Customer Experience Managers
Other individuals/departments from across the organization with whom this manager interacts somewhat frequently (i.e., sales, operations, and HR).
With an idea of where the feedback ought to come from for the specific recipient in mind, you can move to Step 3.
Step 3: Define what gets evaluated
The third step in soliciting and administering 360-degree feedback is to define the relevant areas of performance the feedback is meant to address.
These performance dimensions should be derived from current job analyses, or based on top management’s beliefs about new behaviors they want to develop and reward in the future.
A helpful question to ask yourself when defining these dimensions would be, “What behaviors should we expect from a high-performer in a given competency, and how often should we expect these behaviors?”
In the case of the Customer Experience Manager from Step 2, we might consider performance dimensions related to team management and the performance of the whole department.
We might also consider behaviors we would expect from a high-performing Customer Experience Manager preparing for an increase in the scope of their responsibilities.
It is important to note that ratings can be based on evaluations (e.g., 1 = poor performance, 5 = excellent performance) or on the frequency of behavior (e.g., never does this, sometimes does this, always does this).
Step 4: Decide on how feedback is measured
The fourth step in preparing to deliver 360-degree feedback involves the design of the multisource feedback process.
This covers the survey’s scale format (how behaviors and performance are measured) and the availability of commentary to supplement ratings.
There is a wide variety of rating scale formats to choose from. A commonly used format is the Likert scale, which asks for a rating on a set of performance dimensions on a numeric scale, e.g., 1-5 (1 = strongly disagree, 5 = strongly agree).
Suppose your survey includes a scaling system like the Likert scale. In that case, the scale must offer enough flexibility for differentiation without becoming so unwieldy that respondents struggle to use the survey.
Here are two brief examples demonstrating the importance of intentional design of the survey.
The survey may become overcomplicated using a 10-point scale: how easy is it to differentiate between rating someone a 7 versus a 6 or an 8?
Alternatively, the survey could become oversimplified using a three-point scale, where 1 = Low, 2 = Average, and 3 = High. This is unlikely to offer enough flexibility for the rater to distinguish between merely average performance and high performance.
I will use a 6-point Likert scale in the section “From Theory to Practice,” where I provide a sample survey and outline the different ratings.
Returning to the case of our Customer Experience Manager, let us assume a number of the items asked on the feedback survey relate to this manager’s leadership. We’ll use a 5-point Likert scale like so:
Completely Disagree/Hardly Ever
Somewhat Disagree/Usually Not
Neither Agree nor Disagree/Half of the Time
Somewhat Agree/More Often Than Not
Strongly Agree/Almost Always
Items one might expect on the survey could include:
This individual supports me in meeting my personal and professional goals (DR)
This individual provides helpful, ongoing feedback to help improve my performance (DR)
This individual is responsive to my requests when I need to escalate a concern (CX)
This individual takes time to support their direct peers and help mentor new managers (PR)
This individual contributes value to team meetings (MGR, PR)
This individual demonstrates accountability for their team's results (MGR)
The items listed above are examples of statements that could be rated from the perspectives of some of the stakeholders identified in Step 2.
The first two statements are examples the survey would ask of direct reports (DR).
The third statement could be presented to a customer (CX), the fourth to this individual's peers (PR), and the final statements to the individual's manager (MGR) and their peers.
Given the right technology, such as 360-degree feedback software, the questionnaire can prompt each rater to select their relationship to the recipient (manager, peer, etc.) at the start of the survey.
Raters are then offered questions tailored to their relationship with the recipient, which avoids the inclusion of inaccurate data.
There may, however, also be common areas in which the different sources can rate the same individual regardless of relation to the recipient (e.g., all sources can score the recipient on items relating to communication).
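As a sketch of this tailoring logic, consider the following. This is a hypothetical illustration rather than the behavior of any particular survey tool; the category codes (DR, CX, PR, MGR) and item texts follow the examples above, while the common "communication" item and the wording of the fourth scale anchor are assumptions:

```python
# Hypothetical sketch: tailoring 360-degree survey items by rater category.
SCALE = {
    1: "Completely Disagree/Hardly Ever",
    2: "Somewhat Disagree/Usually Not",
    3: "Neither Agree nor Disagree/Half of the Time",
    4: "Somewhat Agree/More Often Than Not",  # anchor wording assumed
    5: "Strongly Agree/Almost Always",
}

ITEMS_BY_CATEGORY = {
    "DR": [  # direct reports
        "Supports me in meeting my personal and professional goals",
        "Provides helpful, ongoing feedback to help improve my performance",
    ],
    "CX": [  # customers
        "Is responsive to my requests when I need to escalate a concern",
    ],
    "PR": [  # peers
        "Takes time to support their direct peers and help mentor new managers",
        "Contributes value to team meetings",
    ],
    "MGR": [  # the recipient's manager
        "Contributes value to team meetings",
        "Demonstrates accountability for their team's results",
    ],
}

# Items every rater answers regardless of their relationship to the recipient
COMMON_ITEMS = ["Communicates clearly and in a timely manner"]

def build_questionnaire(rater_category: str) -> list[str]:
    """Return the survey items shown to a rater of the given category."""
    return COMMON_ITEMS + ITEMS_BY_CATEGORY.get(rater_category, [])
```

The point of the structure is simply that each rater answers only the items relevant to their relationship with the recipient, plus a shared core.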
In addition to numerical ratings, it can be helpful to include room on the feedback form for additional commentary, so raters can provide the specific information behind a score or offer feedback not measured by the survey.
The key to making comments helpful is training raters to give specific, behavior-focused feedback rather than comments on qualities the recipient cannot easily change, like personality traits.
Step 5: Collect and process feedback data
The fifth step in the 360-degree feedback process is collecting and processing feedback data from the selected raters.
If this has not happened by this step, it would be helpful to train raters on how to navigate the survey, inform them of the scale, and remind them of the benefits of using specific, behavioral commentary (if that is part of the survey).
You would also remind them of the confidentiality and anonymity behind the data you are collecting and its usefulness to the individual who will eventually receive it.
Once surveys have been submitted, the data can be compiled, organized, and analyzed.
One way of doing this is to provide the recipient with normative data to compare their results with summary data of other participants in the process (i.e., the average scores across the items measured by the other Customer Experience Managers who were rated).
The various feedback sources can be separated by source type and used as a basis for comparison. Knowing these differences could prove very insightful for the person receiving the feedback.
As an example, let us say one of the areas scored across all feedback sources was “Communication”.
For example’s sake, let us assume that the mean score the recipient received from the direct reports who submitted the survey equaled 3.89/5. Let us also assume that the score the recipient received from their direct supervisor equaled 1.58/5.
That is a large discrepancy and should naturally cause the recipient to question the difference in scores between the two sources, especially since the lower score came from their direct supervisor.
This kind of insight should lead the recipient to reflect on their behavior or other reasons why their supervisor gave them a low score while their direct reports gave a much higher one.
It might also generate enough curiosity for the recipient to seek out additional information about the perceived difference in communication.
Assuming the low score was truly based on an objective evaluation of the recipient's communication, the recipient might solicit feedback to understand, from the supervisor's point of view, why communication scored so low.
That feedback, in turn, could lead the recipient to modify their communication style, frequency, or other related factors to meet the level of communication the supervisor expects.
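As a sketch of this kind of per-source analysis, the following separates mean scores by source type and flags large gaps. All individual ratings are hypothetical, chosen so the direct-report average works out to the 3.89 figure used above:

```python
# Hypothetical sketch: average 360-degree ratings by source type and flag
# large discrepancies between sources.
from statistics import mean

ratings = {
    "direct_reports": [4.0, 4.0, 3.5, 4.2, 3.75],  # invented individual ratings
    "supervisor": [1.58],
    "peers": [3.4, 3.9],
}

def mean_by_source(ratings_by_source: dict) -> dict:
    """Average the ratings within each source type, rounded to 2 decimals."""
    return {src: round(mean(vals), 2) for src, vals in ratings_by_source.items()}

def flag_discrepancies(means: dict, threshold: float = 1.0) -> list:
    """Return source pairs whose average scores differ by more than `threshold`."""
    sources = list(means)
    return [
        (a, b)
        for i, a in enumerate(sources)
        for b in sources[i + 1:]
        if abs(means[a] - means[b]) > threshold
    ]

means = mean_by_source(ratings)
print(means)
print(flag_discrepancies(means))  # the supervisor's score stands out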
Qualitative data collected could be presented as submitted via the survey, but leveraging technology to summarize comments (e.g., word clouds, or trends in the words and phrases used) could also prove useful and help ensure that respondents remain anonymous.
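A minimal sketch of such a summary: aggregate word counts can stand in for a word cloud or phrase-trend report, so no comment is attributed to any individual rater. The comments and stopword list here are invented for illustration:

```python
# Hypothetical sketch: summarize free-text survey comments as aggregate
# word counts to preserve rater anonymity.
from collections import Counter
import re

comments = [
    "Communicates priorities clearly in team meetings",
    "Could communicate project status more often",
    "Very supportive mentor, communicates expectations well",
]

STOPWORDS = {"could", "very", "more", "often", "well"}

def word_frequencies(texts: list) -> Counter:
    """Count words across all comments, dropping short words and stopwords."""
    counts = Counter()
    for text in texts:
        counts.update(
            w for w in re.findall(r"[a-z]+", text.lower())
            if len(w) > 3 and w not in STOPWORDS
        )
    return counts

print(word_frequencies(comments).most_common(3))
```

A real implementation would likely also stem words (so "communicate" and "communicates" merge) and suppress terms used by only one rater.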
However, the analyzed data is only as valuable as how it is delivered to the recipient, and by whom.
Step 6: Deliver the 360 feedback
The sixth step in implementing and using a 360-degree feedback process involves the delivery of said feedback. There are several considerations relating to feedback delivery:
Who gets to deliver it
The shape and feel of the delivery itself
The response to how the feedback is received (covered in Step 7 below)
Mode of delivery—identifying how the feedback is organized and who gets to deliver it
The first consideration involves how the feedback is summarized and organized in a meaningful way for the recipient. The next involves selecting the person (or people) who will deliver the feedback.
The recipient should understand how to read any reporting associated with the feedback (e.g., if averages of scores are provided, they should be easy to read; if commentary was collected and summarized, this should be noted to prevent confusion).
The feedback ought to be explained as objectively and holistically as possible so that the recipient pays attention both to areas in which they ranked highly and to areas noted for improvement.
While areas for improvement ought to be discussed, it is essential to discuss the employee’s strengths and leverage this information to support them in overcoming developmental challenges that may arise from the questionnaire results.
This requires that the person delivering the feedback themselves understands the information therein and makes time to prepare for the conversation. This, in turn, will enable them to guide the recipient through the feedback as objectively as possible.
One way to do this would be for the feedback deliverer to support the recipient in identifying whether the areas for improvement highlighted in the feedback summary (i.e., the presence of undesirable behaviors or the absence of desirable ones) stem from motivation or ability.
To address areas for improvement stemming from motivational issues, constructive feedback can include rewards to reinforce the desired behavior change. This is an attempt to extrinsically motivate the recipient to behave in a more desirable way.
Another way to address motivation-related areas for improvement could focus on overcoming discomfort or dislike the recipient feels towards the desired behavior.
To address areas for improvement related to ability, constructive feedback might involve training, mentorship, or other ways that encourage the recipient to learn how to behave in the desired way.
This support will help the recipient's action plan (discussed in more detail in Step 7) leverage the feedback data by:
Setting learning goals
Committing to specific behavior changes
Monitoring behavior changes and learning over time to validate changes in ratings on future feedback surveys
Identifying changes in department or team performance linked to behavior changes, growth, and development.
Who, then, should deliver the feedback collected and serve as this support source? Here are several potential options:
Consultant delivers feedback one-on-one
An external consultant, HR professional, or I/O psychologist can review the results with the individual. Using a consultant may help minimize perceived risk to the recipient and allow the recipient to consider the feedback in full.
Immediate supervisor delivers feedback one-on-one
This mode of delivery has the drawback of increasing the likelihood of defensiveness. It requires that the supervisor review the feedback objectively and tie the survey results to tangible steps toward improvement.
Report without guidance from another person
This means that only the recipient sees the results. An unguided report poses the risk that the individual will use the information entirely as they please, if they choose to use the feedback at all.
Facilitated group feedback session
Feedback recipients can receive their results and have a general discussion on how to use them, seeking clarity on the results and what to take away from them. The conversation can be facilitated by someone from HR or an external consultant. Alternatively, recipients can have a targeted group dialogue about each other's results; in this case, everyone receives everyone else's results.
The most significant difficulty in following these guidelines emerges because, to some degree, people tend to get defensive in the face of feedback and constructive criticism.
This leads us to the next consideration, which is the conversation itself.
The Conversation—creating psychological safety
As mentioned earlier, feedback reinforces desirable behaviors and dissuades an individual from repeating undesirable ones.
For feedback to be effective, recipients of feedback have to be receptive and willing to hear the feedback, internalize, and commit to an appropriate response.
To deliver feedback effectively, the person or people providing it must create a psychologically safe environment that supports a positive conversation that leads to positive outcomes.
Psychological safety is the belief that individuals will not be punished or humiliated for sharing their ideas, questions, concerns, or mistakes.
As the Center for Creative Leadership puts it, in thinking about the workplace in particular, psychological safety is “…a shared belief held by team members that others on the team will not embarrass, reject, or punish you for speaking up.”
One helpful principle for anyone in a position to give feedback is to internalize the idea that feedback is beneficial to enabling others to learn and grow.
Doing so makes it easier for the deliverer to offer feedback.
It also makes it easier for the recipient, provided they know the deliverer has their development in mind, to feel less defensive and more receptive to the feedback.
Without going into more detail about psychological safety, here are a few valuable tips and techniques for having an effective feedback conversation regardless of who reviews it with the recipient:
Practice active listening
Reviewing feedback with someone (360-degree or otherwise) involves conversation and dialogue. When the recipient responds and asks questions about their results, it is vital for the person reviewing the feedback to listen carefully and ask clarifying questions as necessary. Active listening serves the dual purpose of ensuring the deliverer understands the recipient correctly and ensuring the recipient feels heard.
Pay attention to non-verbal cues
These carry as much information as what is being said during the conversation. The deliverer needs to monitor their reactions and non-verbal cues in addition to those of the recipient. Closed-off body language may indicate that the recipient does not like what they hear, which can prompt the deliverer to inquire about their reaction and understand the recipient’s perspective on the feedback results.
Paraphrase what you hear
The deliverer ought to paraphrase what they hear throughout the feedback conversation. This, again, ensures that they understand the recipient correctly when they respond and helps the recipient feel heard.
Deliver results impartially
Again, the deliverer must stick to the survey results and focus on describing behaviors. It is also crucial that the deliverer review both "positive results" (e.g., high scores indicating a strength) and "negative results" (e.g., lower scores indicating a need for improvement).
Doing so ensures that the feedback is balanced, for it is as essential to recognize and reinforce desirable behaviors and results as it is to correct undesirable ones. This was briefly covered in the Mode of Delivery section as the first part of Step 6.
The final step in the 360-degree feedback process involves action planning. Action planning must begin during the conversation, and a follow-up must be scheduled to measure progress.
Step 7: Support the recipient – action planning to improve the value of 360-degree feedback
The final step in administering 360-degree feedback begins during the feedback conversation and involves checking in with the recipient following the initial feedback review session.
This step is about supporting the recipient and enabling their development.
Before the feedback review session concludes, the recipient ought to commit to acting on the new data.
Throughout this article, we have focused on using 360-degree feedback for developmental purposes (rather than performance appraisal).
Therefore, it is imperative that the recipient co-create and commit to a series of actions, supported by their team, manager, and organization, that aid their professional development.
Here are tips and techniques one can use to support multisource feedback recipients in making progress towards their commitments to growth:
Collaborate on next steps
The deliverer of the feedback ought to work with the recipient on identifying the next steps based on the collected feedback data. This can include:
Planning for future check-in meetings to discuss progress (with any of the relevant stakeholder groups, as appropriate)
Identifying sources or opportunities for the recipient to learn more about and practice behaviors and skills they would like to gain or strengthen
Having the recipient explain specific ways they intend to make the relevant adjustments
Identifying timelines and milestones to measure progress.
Check in on how the recipient feels at the end of the session
The deliverer of the feedback should inquire about the recipient’s emotional reaction towards the session’s conclusion and their feelings towards the action items themselves. Do they feel that the agreed-upon next steps are feasible? Do they feel motivated to work toward them?
If not, ask what is behind their hesitation and what additional support they think they need.
Summarize key takeaways from the feedback session and action items
Have the recipient help you summarize the actions and timelines they have agreed to. This information should be made readily available to the recipient and any relevant stakeholders (e.g., their direct supervisor) for whom it makes sense to follow up on these action steps. An HRIS with this functionality would help secure the survey data while offering easy access.
The support does not stop here. The relevant parties (most likely the recipient's manager or mentor) must follow up according to the agreed-upon timelines discussed at the end of the feedback session.
If the recipient committed to a check-in with their manager 30 days after the feedback session to discuss their progress or learnings, then the manager and recipient must prepare for and hold that meeting.
Let's take a look at an example. Consider a case in which an individual committed to a 30-day check-in after the original feedback session.
Within those 30 days, the recipient agreed to complete a set of e-learning modules to build skills in an area the survey identified as a weakness (e.g., presentation skills).
They also agreed to begin practicing what they learn by signing up to facilitate a couple of presentations on a relevant topic, or to lead a department meeting as its primary facilitator.
Following each presentation or meeting, leadership can send out a survey to the attendees to rate the forum’s effectiveness and rate the presenter on relevant criteria (i.e., engaging presentation, preparedness, perceived confidence, etc.).
This survey information is held by the individual’s manager to be examined during the 30-day follow-up meeting.
At this 30-day check-in, the manager would ask the individual about their progress over the past 30 days, relating explicitly to the agreed-upon developmental areas they were to work on during this time.
Here are some topics and questions the manager might discuss or ask about during this meeting relating to the details outlined above:
What changes has the individual noticed concerning their presentation and facilitation skills over the last 30 days?
How challenging did the individual find it to complete the agreed-upon action items? What was the hardest part about meeting these goals, and what came more easily?
How does the individual feel about their progress over the past 30 days?
A review of the post-meeting/presentation survey data
The manager here might share data from the post-meeting surveys soliciting feedback from attendees on how the individual performed across the relevant dimensions being measured. If the individual completed multiple presentations over the 30 days as they learned how to present more effectively, is there a trend in the survey data indicating positive change?
If so, be sure to recognize the individual’s improvements to reinforce the behavior changes and encourage them to continue their developmental work.
What positive team or business results have emerged from the improved behaviors over the last 30 days?
30 days may not be enough time for substantial change, but during this session the individual and their manager would once again discuss next steps as part of the individual’s development journey.
One useful tip is to send a pulse survey to the stakeholders involved in the original 360-degree feedback process.
To measure progress, soliciting this feedback a few months down the road (perhaps 90–180 days after the original feedback session) can be beneficial to the recipient, as it captures how each source’s perceptions have shifted since they initially submitted their feedback.
It remains important for the manager to check in periodically to offer feedback and on-the-spot insight as relevant; this also helps the recipient feel supported in their developmental journey.
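If you collect ratings in a spreadsheet or HRIS export, comparing the original survey with a later pulse survey is simple arithmetic. The sketch below is purely illustrative (the item names and scores are hypothetical, and any “N” / not-enough-information ratings are assumed to have been excluded beforehand); it computes each item’s mean rating in both rounds and reports the change:

```python
from statistics import mean

def item_means(responses):
    """Average each item's ratings across all raters.

    `responses` maps item name -> list of numeric ratings on the
    1-5 scale; "N" (not enough information) answers are excluded
    before this point.
    """
    return {item: mean(scores) for item, scores in responses.items()}

def progress_report(original, pulse):
    """Per-item change in mean rating between two survey rounds."""
    before, after = item_means(original), item_means(pulse)
    return {
        item: round(after[item] - before[item], 2)
        for item in before
        if item in after
    }

# Hypothetical peer ratings before and after the developmental work
original = {
    "Team empowerment": [4, 3, 4, 3, 4],             # mean 3.6
    "Fosters inclusive environment": [5, 4, 5, 4, 5],
}
pulse = {
    "Team empowerment": [4, 4, 5, 4, 4],             # improved after coaching
    "Fosters inclusive environment": [5, 4, 5, 5, 5],
}

for item, delta in progress_report(original, pulse).items():
    print(f"{item}: {delta:+}")
```

A positive delta on an item the recipient targeted is a sign the behavior change is landing with raters; a flat or negative one is a prompt for the next coaching conversation, not a verdict.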
360-degree feedback example
We have now covered 360-degree feedback and its potential uses in great detail and reviewed a seven-step process for preparing for, conducting, and administering multisource feedback.
Here is a summary of what we have covered:
– Feedback is the information people receive about their performance; in the workplace, it compares one’s performance and behaviors to job expectations
– 360-degree feedback is information about one’s performance or behaviors that comes from multiple sources
– The dimensions of performance that get evaluated should come from job analyses or behaviors top management wishes to see within the organization
– The areas evaluated should be relevant to the individual being evaluated and to their developmental plans
– 360-degree feedback is often collected using a survey, but other methods exist (e.g., interviewing feedback sources)
– If using a survey scale, such as a Likert scale, select a format simple enough to avoid overwhelming respondents but thorough enough to allow scores to be differentiated
– Based on the considerations above, develop the survey and give it to the selected sources to complete
– Train respondents on how to complete the survey (i.e., ensure they understand the different ratings, remind them about anonymity, etc.)
– Consider whether (or to what extent) the report will include survey comments and how that data will be summarized once collection is complete
– Identify who will deliver the feedback and ensure they are prepared to do so as objectively and impartially as possible
– Effectively delivering the feedback requires that the recipient feel a high level of psychological safety; this makes them more receptive to the feedback and reduces potential defensiveness
– The recipient must feel supported in creating actionable steps to improve based on the feedback data
– This support starts during the initial feedback session but must continue over time as the recipient works toward their milestones, projects, and learning opportunities
– A pulse survey can be a valuable tool for evaluating behavioral and performance changes over a longer period (e.g., 6 months); it solicits feedback from the same or similar sources as the original survey and helps measure change over time
Let us now turn theory into practice. The rest of this section will include the following:
A sample 360-degree feedback survey
A sample of a feedback session reviewing survey data from multiple sources in dialog form using some of the techniques from Step 6 and Step 7.
Before you view the examples below, I will provide context on who this survey is for, what it is intended to measure, which sources were selected for input, and why these choices were made.
The following 360-degree feedback survey example and dialog feature a Customer Experience Manager seeking to grow within the organization (as described in the steps above).
The survey will measure this manager on a series of core competencies identified as critical to the organization’s success.
Prospective sources for this feedback could include direct reports, direct supervisors, peers within the same department/role, and other organization members with whom this manager frequently interacts (e.g., HR, sales).
This particular sample survey is designed to be filled out by a colleague of the Customer Experience Manager being evaluated.
As mentioned earlier, multiple versions of the survey could be created for the different stakeholders assessing the individual in question (e.g., surveys for managers and direct reports may phrase questions differently and measure information that particular group would be privy to).
Example multi-rater feedback review conversation
Based on the sample survey items for the Customer Experience Manager (Carl), the following is a brief outline of a 360-degree feedback review session between Carl and his direct manager (Diana), following their exchange of greetings.
This snippet of the dialog focuses on what is visible from the sample survey and the feedback from Carl’s peers.
Please note how both Diana and Carl offer open and honest feedback, and take note of the variety of open-ended questions Diana uses to coach Carl toward improving his team’s performance concerning empowerment.
Diana: All right, Carl, we’re now turning to the feedback you received from your peers, your fellow Customer Experience Managers. Are you ready to take a look at these with me?
Carl: Thank you, Diana; yes – I’m eager to know how I’m doing a year into my role!
Diana: Awesome, then let’s dive in. Similar to the other results we have already covered, you were rated on a 6-point scale where 1 was the lowest score, 5 was the highest, and “N” meant that the respondent didn’t have enough information to answer the given question.
Let’s start by comparing your self-ratings with the mean scores your peers gave you on these items. We’ll also look at how you compare against the rest of your colleagues based on the scores they received from you and their other peers.
Overall, your self-rating didn’t deviate far from how your colleagues rated you. The most significant difference was on the question about fostering an inclusive environment. Still, even then, your colleagues scored you relatively high in that area.
Your lowest score was on the question about team empowerment, where you rated yourself at 4 and the mean score you received from your peers was 3.6. What thoughts come to mind as you look at these numbers?
Carl: Well, I see that it’s not too far off from my self-score, but now I’m wondering what I could be doing better to empower my team.
On average, I also see that my peers scored higher than I did on that particular item. Aside from that, though, I scored more highly on the other items than my colleagues did. I wonder where the disconnect is when it comes to empowering the team, and what my colleagues are seeing?
Diana: I noticed that and want to recognize you for your efforts to foster excellent communication on your team and with your peers and create an inclusive environment. Since you are the newest manager on the team, that is something to celebrate.
What are some of the things you do to validate your understanding when communicating with your team, peers, or even me that you think contributed to this rating?
Carl: I have a habit of asking throughout most of my conversations with my team and colleagues something along the lines of, “So what I’m hearing is…is that correct?” And usually I get an affirmative response, but if I’ve misunderstood something, the person I’m speaking with has an opportunity to correct me.
Diana: Yes, I have noticed that you do that consistently, even when we speak. And it seems that you do this successfully, since you scored highly here both in how your team rated you on this item and in how I rated you. So keep that up; you’re doing a great job of ensuring folks are on the same page.
Let’s chat a bit more about team empowerment. To what extent do you think your team feels empowered to make decisions on behalf of our clients per our values and procedures?
Carl: If I’m being honest, they escalate concerns to me more often than I would like, even when they could have easily handled the situation themselves.
Diana: Okay, let’s lean into that a little. What do you think is behind the higher number of escalations you get from your team? When you get those escalations, how do you typically respond?
Carl: Usually, I spend a little time hearing what the team has to say about the situation, but I don’t want to keep the client waiting. So more often than not, I handle the escalation myself to ensure the client is happy and then proceed with my own tasks.
Diana: I understand. So when was the last time you followed up with one of your team members about an escalation they could have handled, and how did that conversation go?
Carl: Diana, now that you mention it – I can’t say that I’ve ever followed up in that way with the team. If I understand your question correctly, you’re asking how I challenge the team on what they escalate to me?
Diana: That’s precisely right! Realizing that now, what do you think are some things you can start doing, or do differently, to reduce the number of escalations you get and, in doing so, empower your team?
Carl: I can think of a couple of things I can start doing right away. I can start tracking the number of escalations I get and where they come from to ensure I have those follow-up conversations. I usually get escalations from the same handful of team members, so I think that could go a long way.
Diana: That sounds like a great way to start! What else did you have in mind?
Carl: Another thing I can do proactively, before escalations are forwarded to me, is to chat about our values and best practices for navigating client concerns in an upcoming team meeting or during my one-on-ones with the team. I wonder how many of them are fully aware of the updated guidance. Some of them might not realize that by escalating the call to me, they might be frustrating their clients, who then have to wait for the escalation to reach me. This dissatisfies clients whose concerns never needed to be escalated in the first place.
Diana: Good! I think that’s a great idea and would serve as an opportunity to reset expectations with the team and ensure everyone is on the same page. I know some team members have been part of the team for a while and may not realize that there is updated guidance that makes the client experience more streamlined.
Have you noticed any other things your peers do to empower their teams that you might adopt? And what are some different ways you think might help make your team feel more empowered to take care of the client without asking for permission or support?
Carl: I’ll have to think about additional ways to help the team feel more empowered, but empowerment is probably why my colleague Rose started sending a weekly newsletter to her team. In it she includes updates to relevant policies, team metrics, recognition, and other information that supports her team in getting information quickly and efficiently. She also seems to have more frequent touch-base meetings with her team, which is something I can try with mine.
I’ll make it a point to ask my colleagues about some of the other things they do to help their teams feel empowered to handle their workload.
Diana: These are all great insights. I think you’ve already got a few great ideas to try, and you’ll get a lot out of chatting with your peers about what they do to empower their teams.
When would you like to check in about this and tell me about your findings?
Carl: I’ll reach out to my peers today to get some time on their calendars and start picking their brains. I also like the idea of sending regular communication to the team, like Rose does, and I will see if she’s willing to share her template with me.
So I think it’d be great to check in within the next three to four weeks! I can tell you what I learned from my colleagues, about the conversations I had with my team to update them on our values and policy changes, and about any follow-up conversations I have when I receive an avoidable escalation.
Diana: That sounds like a plan to me! Let’s schedule a check-in for the end of the month, so you have plenty of time to get the information you need and have those conversations. We can also chat about any other developmental areas you’d like to focus on during that conversation. In the meantime, feel free to reach out if you need anything at all, but remember: you’re doing a great job.
I look forward to hearing about your findings at the end of the month!
Now it’s your turn
With the seven steps, sample survey form, and example dialog, you should now be ready to roll out 360-degree feedback within your organization!
Giving and receiving feedback can sometimes be tough, but it does not have to be.
360-degree feedback is a great way to support teams in working on specific developmental areas, and it does not have to be tied to your formal performance appraisal process or merit decisions, nor dictate other employment decisions within your organization.
This guide should serve as a sturdy foundation for a great addition to the talent development processes you already have in your organization.
Check out some of the links below to learn more about how to approach feedback and performance management to improve your organization and boost employee engagement.