If You Are Talking Value-Based Payments Without a Social Worker, You Are Doing It Wrong

At least that is what Don Lee, one of the organizers of the Value Based Payment Forward Conference, thought when he invited me out to cover it for my blog and social media. For those of you who are not familiar with Value-Based Payments (VBP), here is a quick definition…

Value-Based Payment (VBP) means a strategy that is used by purchasers to promote quality and value of health care services. The goal of any VBP program is to shift from pure volume-based payment, as exemplified by fee-for-service payments to payments that are more closely related to both quality and cost outcomes.

https://www.lawinsider.com/dictionary/value-based-payment-vbp

Rather than getting reimbursed per face-to-face visit, you get reimbursed for the overall outcomes of an assigned population. This is shifting the way healthcare and mental health care are delivered. The emphasis of the conference is that this method of payment is headed right for us and we need to be prepared. Having previously hypothesized how VBPs might help with crisis mental health services, I was eager to spend two days in Buffalo, NY taking a deep dive into the topic. Here are the themes that emerged…

Data, Data, Everywhere

I can’t even begin to count the number of times I heard the word “data”. The emphasis on data becomes especially relevant when you try to assign a monetary value to it. Why should a health organization think a social intervention has “value” to begin with? You guessed it… it starts with data. Having a “data-informed” intervention is not only key for measuring outcomes, it becomes imperative to the development of a value-based payment…

Similar to grant development, organizations need to think strategically about how they can use their data to fund unmet needs. Nonprofit organizations cannot rest on data that simply describes the number of persons served; they have to demonstrate that their services can solve a specific healthcare problem. For instance, if you are a homeless shelter entering a value-based payment agreement, you need to think not only about the number of people housed but also about the health outcomes that result. Easy enough… right?

Who Gets To Define Value?

Things get messy here, and quickly. There were strong feelings about who gets to assign value to the data collected. Is it the insurance company? Is it the nonprofit attempting to enter a value-based contract? Is it the communities themselves? Is it the hospital or the small-to-medium-sized health system? Is it patients and caregivers?

This leads to the pursuit of aligning all of these stakeholders in an organized way to define value. Again, not so easy…

Attempting to align hospitals, payers, patients, providers, and community-based organizations on what a “valuable” outcome is requires tough questions. This can be a challenge, but it is my contention that nonprofits and community-based organizations (CBOs) can bring a ton of value to healthcare and improve outcomes. Presenters offered some frameworks for how to better define value…

The road to defining your value as an organization can be a messy one, but connecting these tough questions to your outcomes data is key. You can make a compelling case that your data show improved health outcomes; that introducing intervention X can save the health system Y amount of money. For instance, a homeless shelter can demonstrate that providing an insurance member with housing was cheaper than an inpatient stay. However, there is one last, even more complex hurdle…
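Before getting to that hurdle, here is a minimal, back-of-the-envelope sketch of the housing-versus-inpatient comparison above. Every number and name in it is a hypothetical placeholder of mine, not a figure from the conference; the point is simply that the “intervention X saves the health system Y” argument ultimately comes down to this kind of math.

```python
# Purely illustrative: the dollar figures and lengths of stay below are
# hypothetical placeholders, not data from the conference or any real program.

HOUSING_COST_PER_DAY = 85        # e.g., a supportive housing / shelter bed day
INPATIENT_COST_PER_DAY = 2500    # e.g., an avoided inpatient bed day

def estimated_savings(housed_days: int, inpatient_days_avoided: int) -> int:
    """Rough savings: avoided inpatient costs minus the cost of housing."""
    avoided = inpatient_days_avoided * INPATIENT_COST_PER_DAY
    spent = housed_days * HOUSING_COST_PER_DAY
    return avoided - spent

# One member housed for 90 days who avoids a five-day inpatient stay:
print(estimated_savings(housed_days=90, inpatient_days_avoided=5))  # 4850
```

Of course, the hard part is not the subtraction; it is agreeing on whose costs, whose outcomes, and whose data count in the first place, which is exactly the hurdle below.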

Who Defines “The Problem”?

Let’s keep going with interventions for homelessness. Payers may view the cost savings as a “win”. A hospital system might not care, as it needs the inpatient stay to generate revenue. The endocrinologist who entered into the value-based payment might not care, as it may or may not improve the patient’s diabetes. Not only is a clear definition of value difficult to reach, but the problem(s) the intervention is trying to solve need to be aligned as well.

This is where I think social work can play a role. We are problem solvers and systems thinkers. We are trained to assess complex systems (whether families or communities) to produce a positive outcome.

If the Value Based Payment Forward Conference is any indication, we are in for some complex times in healthcare reform. We need to think not in silos but in partnership about how we can better serve those in need. As presenter Dr. Alisahah Cole of Atrium Health pointed out, as we continue to understand data about underserved populations, we have to ask tough questions…

Wherever healthcare reform takes us, it is going to require tough answers to even tougher questions. It is going to take innovative collaborations to improve outcomes, define value, and, more importantly, solve “the problem”.

For more tweets and insight from the conference, check out my summary via Wakelet:

“Big Technology” and Their Responsibility for Suicide Prevention

From how we shop to how we receive healthcare, technology is having an impact on the decisions we make. As we rely on technology for our decisions and social interactions, companies like Amazon, Google, and Facebook are examining large amounts of data about us. This also helps them make decisions about advertising and user experience. As time goes on, stewardship of this data is proving to be an increasing responsibility for them.

They have more information about our habits and increasingly more detail about our direct thought processes. We are typically searching Google for something for a reason. We often post things on social media asking questions and looking for support. These searches are benign when we are searching for groceries, but when someone is in deep emotional pain, they present a host of ethical issues these companies should be considering.

Recent attention has been paid to Facebook’s response to those posting suicidal content and those who have livestreamed a suicide attempt.

These cases raise complex questions for users and for the companies. Big technology companies like Facebook have a unique opportunity to do something about suicide. From a public health perspective, they control a lot of data that can be helpful but can also be harmful. Facebook is an interesting “case study” as its handling of this issue has been the most publicized, but as social media and other companies begin to “understand” us more, they will have a responsibility to take action. Here are the points they need to be mindful of.

Assessment

Prior to taking action on suicidal content, a careful assessment is needed. When Facebook is determining suicidal risk, how is it doing it? Over the summer Facebook gave a sneak peek into how it is using AI to assess suicidal risk. It appears to be using a mix of AI and humans who come behind and check. I found this explanation both worrying and reassuring. It is good that they are taking steps to use tools such as Natural Language Processing and AI.

The science behind this is new, but just as Facebook is consulting with the National Suicide Prevention Lifeline, other technology companies should be consulting with experts in the field. As suicidal ideation becomes more of a “risk” that big technology assumes, reaching out to research organizations such as the American Association of Suicidology is key. The “knowledge” gained from a large corpus of data is only as good as the people interpreting it. When assessing suicide risk, nuance is key. Ensuring that AI understands this nuance, or that there is a way to manage it (such as passing the case to a human), is critical.
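To make the “AI flags it, humans check it” idea concrete, here is a deliberately toy sketch of that kind of workflow. The scoring function, thresholds, and review queue are assumptions of mine for illustration; this is not Facebook’s actual system, and a real classifier would be trained on data and vetted with clinical experts rather than matching a handful of phrases.

```python
# Illustrative sketch only: a toy "classifier score plus human review" workflow.
# The model, thresholds, and queue are hypothetical, not any company's system.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str

def model_risk_score(post: Post) -> float:
    """Stand-in for a trained NLP model that returns a 0-1 risk score."""
    # A real system would use a trained, clinically vetted classifier;
    # this placeholder just flags a few obvious phrases for demonstration.
    phrases = ["want to die", "kill myself", "no reason to live"]
    return 0.9 if any(p in post.text.lower() for p in phrases) else 0.1

def triage(post: Post, review_queue: list) -> str:
    """Route a post by score; anything flagged goes to trained human reviewers."""
    score = model_risk_score(post)
    if score >= 0.8:
        review_queue.append(post)          # escalate for human review
        return "flagged_for_human_review"
    if score >= 0.5:
        return "show_support_resources"    # lower confidence: offer resources
    return "no_action"

queue: list = []
print(triage(Post("1", "I feel like there is no reason to live"), queue))
```

Even in a sketch this small, the hard questions show up immediately: where the thresholds sit, who staffs the review queue, and what happens to the posts that land in the middle.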

Dealing with concerns such as false positives and misinterpreted signals is going to be key, but so is how the information is presented to the user. Once you have assessed risk, what does the experience look like for the user?

Intervention and Consent

This is a larger concern on the user side, and I would argue that engaging with both users and professionals is key. The question I always ask about technology and mental health intervention is: how is it any different from face-to-face contact?

Mental health treatment generally requires consent (I will get to the exceptions in a minute), and in the low-to-medium-risk categories a face-to-face contact would involve consent followed by some education and intervention. This should be the model for the user interface of these platforms. To the extent that big technology firms interested in this work can offer consent, information, and minor intervention, that is key: little nudges of “It looks like you are having a hard time, can we connect you with ______ crisis service?” or “It seems like you are depressed… do you need ______ …”, with an additional layer of “Because you said _____, we are going to ____.” Users and professionals should be convened to decide what they would like this experience to look like.
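As a rough, purely hypothetical sketch of that tiered, consent-first flow (the tiers, wording, and field names are my own placeholders, not any platform’s product, and the blanks from the post are intentionally left blank):

```python
# Illustrative sketch only: a hypothetical tiered, consent-first response flow.
# Risk tiers, messages, and fields are placeholders, not a real product.

def respond_to_risk(risk_level: str) -> dict:
    """Map an assessed risk tier to a consent-based prompt."""
    if risk_level == "low":
        return {
            "prompt": "It looks like you are having a hard time. "
                      "Can we connect you with ______ crisis service?",
            "requires_user_consent": True,
        }
    if risk_level == "medium":
        return {
            "prompt": "It seems like you are struggling… do you need ______?",
            "explanation": "Because you said _____, we are going to ____.",
            "requires_user_consent": True,
        }
    # High risk: the hard case discussed below; humans and local crisis
    # services need to be involved, not just an automated message.
    return {
        "prompt": "We are concerned about your safety.",
        "escalate_to_human": True,
        "requires_user_consent": False,  # this is exactly the messy exception
    }

print(respond_to_risk("low")["prompt"])
```

The high-risk branch is deliberately the thinnest part of the sketch, because that is exactly where things get complicated.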

So here is where things get messy… and fast. What should you do when an algorithm decides you are high risk? When working face to face, clinicians often have to decide whether a suicidal person requires police intervention. This is a decision that should not be taken lightly. The process has nuance, and laws and interventions vary at the state and local level. Not only that, but one has to consider the training of the officer taking the call. Tech companies are going to have to get familiar with the “grey” complexities of deeming someone a danger to themselves.

There are obvious cases of someone recording an attempt or making it clear in a post; however, there are many signals that merely suggest risk. In the above article about Facebook, Dr. John Torous warns of “practicing black box medicine”. I would agree that having an algorithm make a decision without informing users does not demonstrate consent. For tech companies interested in tackling suicidal ideation in real time, their decisions shouldn’t be hidden inside a black box. For an issue like suicide, the user interface should attempt to mimic face-to-face contact, and that intervention should be done in partnership with the user, local authorities, and crisis services. This infrastructure is not easy to build, but organizations like the National Suicide Prevention Lifeline and Crisis Text Line want to partner with you. You can find out about partnerships with Crisis Text Line here and learn more about the National Suicide Prevention Lifeline’s network.

Data Governance/Privacy

The next concern is what happens once data is collected. Keep in mind that these companies have a large amount of our data but are not healthcare companies. Even so, how should this data be governed, and how is individuals’ privacy protected?

This was an interesting way of framing the question. Should tech companies scanning our risky behaviors be held to the same standard as “medicine”? If they are going to provide symptom education and “intervention”, should they be held to the standards of health privacy laws such as HIPAA? If not those standards, how can “big tech” best protect privacy? How long should information be stored? Can data be de-identified afterward so that companies can still “learn” from it?

These critical questions are central to the debate. There are no easy answers, but again there is the concern of decisions being made in a “black box”. Companies dealing with large amounts of data about your health should be transparent about how they are using it. Many argue that individuals should be compensated for their data if companies are going to “learn” from it.
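On the storage and de-identification questions above, here is one minimal sketch of what pseudonymizing identifiers and enforcing a retention window could look like. The salted-hash approach and the 30-day window are assumptions for illustration, not a claim about what HIPAA requires or about any company’s actual policy.

```python
# Illustrative sketch only: salted hashing of identifiers plus a retention
# window. Both the approach and the 30-day limit are hypothetical choices.

import hashlib
from datetime import datetime, timedelta, timezone

SALT = "rotate-and-store-this-secret-separately"
RETENTION = timedelta(days=30)  # hypothetical retention window

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((SALT + user_id).encode()).hexdigest()

def strip_expired(records: list) -> list:
    """Drop records older than the retention window."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [r for r in records if r["collected_at"] >= cutoff]

record = {
    "user": pseudonymize("user-12345"),   # no raw identifier is stored
    "signal": "flagged_post",
    "collected_at": datetime.now(timezone.utc),
}
print(strip_expired([record]))
```

It is worth noting that hashing an identifier is only pseudonymization; truly de-identifying behavioral data so it cannot be re-linked is a much harder problem, which is part of why these governance questions matter.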

Where Do We Go From Here

Large technology companies have an opportunity in that they hold immense amounts of data. From a public health perspective, they can make an impact on health issues like suicide and other diseases. With this opportunity comes a responsibility to users to protect their rights and privacy. Being in a unique position to intervene on suicide in real time is critical work.

Tech teams need to work with practitioners to determine how this real-time intervention differs from face-to-face intervention. More importantly, they need to ask users how they would want this experience to look, and to ask challenging questions about how to best serve the public while holding personal health data. I hope that technology companies continue to ask these challenging questions, and, not only that, provide the answers to users and society at large.