Teaming Data → RootNPI

The DocGraph / CareSet team does great work and I have personally benefited from the availability of their original CMS teaming data, even using it in a chapter from my dissertation.

They recently updated their methodology and created a new group of datasets they call “Root NPI”. Along with this update, they will no longer be updating the original-format teaming data. While I understand the need for this change, the fact that they have neither updated the original data nor retroactively created the new RootNPI data (beyond 2014) is a problem for me, as I use the time variation in these datasets and would like to be able to add years.

To get around this limitation I created a method that, to a fairly close approximation, creates the new datasets from the old, and therefore allows me to perform analyses on data from 2009 to 2015. The idea is to take the 180-day files and make them symmetrical. My commented SAS code is here, but the main commands are:


/* Duplicate the teaming data, switching NPI_Number1 and NPI_Number2 */
DATA phy_ref_2014_180_x2;
SET ref_med.phy_ref_2014_180(rename=(NPI_Number2=NPI_A NPI_Number1=NPI_B))
    ref_med.phy_ref_2014_180(rename=(NPI_Number1=NPI_A NPI_Number2=NPI_B));
RUN;

/* For each NPI pair, keep the larger of the two directional 180-day counts */
proc sql;
CREATE TABLE npiroot_2014_180 as
SELECT NPI_A, NPI_B
   , MAX(Bene_Count) as patient_count
FROM phy_ref_2014_180_x2
GROUP BY NPI_A, NPI_B
; quit;

Complete data exist in both formats for 2014, which allows me to compare the effectiveness of my transformation method. Here are some statistics from the comparison:

  • For the pairs that match, there is a very high correlation between the two (0.97938, see scatter below)
  • While 37.5% of the pairs do not match (50m of 185m), these pairs only account for ~10% of the total number of shared patient connections (~800m of ~7.5B)
  • It looks like most of the missing connections happen right near the 11-patient reporting cutoff.
    • In fact, for 30% of missing pairs, the pair that is present has a patient count of exactly 11
    • It is 19% for 12, 13% for 13, and 9% for 14
    • 90% have fewer than 22, 95% fewer than 33, and more than 99% fewer than 100
  • There seems to be a decently large, non-random set of providers that are in the new data, but not in the old. They all seem to be medical device related. Here are the top 10: Arriva Medical, Degc Enterprises, Lincare, All American Medical Supplies, United States Medical Supply, Med-Care Diabetic & Medical Supplies, Ocean Home Health Supply, Binson’s Hospital Supplies, Passaic Healthcare Services, DJO
  • There is no similar pattern for providers that appear in the old data but not the new. For consistency across years, I will probably exclude the above set of providers from my analyses.
  • Here is a comparison of the two datasets in terms of the number of patients (the strength of the connection), RootNPI vs constructed:
    • Median: 21 vs 21
    • Mean: 41.6 vs 43.4
    • 95th percentile: 128 vs 134
    • 99th percentile: 347 vs 378
    • Standard deviation: 104.5 vs 116.6
  • It may seem odd that the constructed dataset has a larger average than the new RootNPI data, since the new data use the full calendar year to define a connection while the old data used a 180-day window. I think what accounts for the discrepancy is that the old 180-day files included connections whose second visit fell up to six months after December 2014, which RootNPI omits.

Finally, to ensure that there are not any odd systematic variations between the two measures, I created a scatter plot comparing the patient count I calculated from the original teaming data with the new RootNPI patient count. I truncated the plot at 1000, both because there are just too many observations below 1000 and because I am mainly interested in what happens to the relationship as both get large.
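For reference, the merge behind this comparison can be sketched in PROC SQL along the following lines. This is a rough sketch only: the RootNPI table name (ref_med.rootnpi_2014) and its column names are my assumptions, not the actual file layout.

/* Hypothetical sketch: align the constructed pairs with the RootNPI pairs.
   The rootnpi_2014 dataset and its column names are assumptions;
   adjust them to the actual RootNPI file layout. */
proc sql;
CREATE TABLE compare_2014 as
SELECT COALESCE(c.NPI_A, r.NPI_A) as NPI_A
   , COALESCE(c.NPI_B, r.NPI_B) as NPI_B
   , c.patient_count as constructed_count
   , r.patient_count as rootnpi_count
FROM npiroot_2014_180 as c
FULL JOIN ref_med.rootnpi_2014 as r
   on c.NPI_A = r.NPI_A and c.NPI_B = r.NPI_B
; quit;

/* Correlation for the pairs that appear in both datasets */
proc corr data=compare_2014;
   var constructed_count rootnpi_count;
run;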

To me, this looks really reassuring. The two measures seem to be similar, with some noise, and this noise appears to get smaller as the number of patients gets larger.

Mendeley – Fixing Author and Journal Metadata

I use Mendeley to store, organize, and manage my library of academic papers. Its tagging and search features are excellent, and Mendeley helps keep the process of finding previously read literature manageable. At times I hear or see an author’s name and wonder which of their papers I have previously read. In Mendeley, you can select and view all of the papers by a single author quite easily. However, there is an issue with variations in names. When you filter by author, Mendeley has no way of knowing that “Mark Pauly”, “Mark V Pauly”, and “MV Pauly” are all the same person. Fixing this through the interface would be a painful, manual task involving selecting and editing each individual paper.

Fortunately, Mendeley stores the local library information in a relatively easy-to-access SQLite database, and I know SQL. What I did, and will show, was to find probable duplicate names and merge them together using SQL code.

Step 0: BACK UP YOUR DATABASE!

The main databases are stored in the local AppData folder (“C:\Users\[Your_Windows_UserName]\AppData\Local\”) under “Mendeley Ltd\Mendeley Desktop”. What you are looking for is a file similar to “dl679@cornell.edu@www.mendeley.com.sqlite”. BACK THIS FILE UP BEFORE PROCEEDING.

Step 1: Get and install a program to read and write SQLite

There are a variety of tools out there, but I used, and can recommend, SQLiteStudio (https://sqlitestudio.pl/index.rvt).

Step 2: Connect to the database

Open up SQLiteStudio and add the database you located in Step 0, then connect to it. Once you have it open, you should be able to see the tables that Mendeley uses. The main one is “Documents”, which has one entry for each of the articles in your library. The authors are stored in “DocumentContributors”.
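If you would rather confirm the table list from the SQL editor than browse the sidebar, SQLite’s built-in catalog has it:

-- List the tables in the Mendeley database (run in SQLiteStudio's SQL editor)
SELECT name
FROM sqlite_master
WHERE type = 'table'
ORDER BY name;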

Step 3: Find suspected duplicates

The following code generates a good list of authors that are probably duplicate entries.


SELECT DocumentContributors.lastName, DocumentContributors.firstNames
   , count(DISTINCT Documents.id) as Num_Papers
   , max(Num_Papers_LN) as Num_Papers_LN
FROM DocumentContributors
INNER JOIN Documents
   on DocumentContributors.documentId = Documents.id
INNER JOIN (
   SELECT lastName, count(DISTINCT Documents.id) as Num_Papers_LN
   FROM DocumentContributors
   INNER JOIN Documents
      on DocumentContributors.documentId = Documents.id
   GROUP BY lastName
   ) as ln
   on ln.lastName = DocumentContributors.lastName
GROUP BY DocumentContributors.firstNames, DocumentContributors.lastName
HAVING Num_Papers_LN > 8
ORDER BY Num_Papers_LN desc, DocumentContributors.lastName, Num_Papers desc;

I limited my list to authors whose last name appears on more than 8 papers, as I did not want to spend too much time cleaning up authors with only a few entries.

Step 4: Identify and update duplicates

Once I have my list, I check whether more than one author shares a given last name. The idea is to create a search string that uniquely identifies an author, then standardize the first name. For example, I searched for:


SELECT *
FROM DocumentContributors
WHERE lastName = 'Town'
  AND firstNames LIKE '%';

In my database, all of the authors with the last name “Town” were in fact “Robert J Town”, so I could safely run the following update statement to standardize the first name. I chose to omit periods and use the first initial of the middle name, but your standardization procedure may differ:


UPDATE DocumentContributors
SET firstNames = 'Robert J'
WHERE lastName = 'Town'
  AND firstNames LIKE '%';

If there are multiple authors with the same last name, you can include a first initial before the % to filter on it. Always check (with a SELECT) what will be impacted by your query before running an UPDATE statement.
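For example, a preview like this (the names here are purely illustrative, not from my library) pulls only the variants starting with a given initial:

-- Illustrative only: preview the rows that a narrower UPDATE would touch,
-- i.e., authors named 'Smith' whose first name starts with 'R'
SELECT *
FROM DocumentContributors
WHERE lastName = 'Smith'
  AND firstNames LIKE 'R%';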

Step 5: Make Mendeley update the search index

Finally, to get Mendeley to update the search index (including the index of authors): with Mendeley closed, delete the files in “…\AppData\Local\Mendeley Ltd\Mendeley Desktop\www.mendeley.com\dl679@cornell.edu-3ab2” (your final subfolder will differ). BACK UP THESE FILES FIRST. Deleting (and having Mendeley rebuild) these index files could very well speed up search if it has become slow.

One caveat: this approach will not update the information in your library on Mendeley.com and will not sync to other computers. I think this could be accomplished by updating the “eventLog” and “eventAttributes” tables, but I didn’t have the time to write a sufficiently automated process; something could probably be done fairly easily in Python.

Step 6: See which authors you read the most


SELECT DocumentContributors.lastName, DocumentContributors.firstNames
   , count(DISTINCT Documents.id) as Num_Papers
FROM DocumentContributors
INNER JOIN Documents
   on DocumentContributors.documentId = Documents.id
GROUP BY DocumentContributors.firstNames, DocumentContributors.lastName
HAVING Num_Papers > 5
ORDER BY Num_Papers desc;

A side benefit of this project is that I can see which authors are particularly important to my research by running the above code. Not surprisingly, my advisor’s advisor, Marty Gaynor, tops the list (along with the very prolific Larry Casalino). Amitabh Chandra, David Dranove, Bruce Landon and Robert Town are next up (can you tell I research health economics!?).

I also used a similar process to clean up the journal names, and then checked which journals I read most often. Health Affairs topped the list, followed by NBER Working Papers. There is a big gap before the next group: Journal of Health Economics, Health Services Research, Journal of Economic Perspectives, and American Economic Review.
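For anyone curious, the journal version of the duplicate-finding query looks roughly like the sketch below. I believe the journal name lives in the Documents table’s publication column, but treat that as an assumption and verify the column name in your own schema first.

-- Hypothetical sketch: count papers per journal-name spelling.
-- Assumes the journal name is stored in Documents.publication; verify first.
SELECT publication, count(*) as Num_Papers
FROM Documents
GROUP BY publication
HAVING Num_Papers > 5
ORDER BY Num_Papers desc;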

Olympics 2018: Medals Recap

Some thoughts on the results of the 2018 Olympics:

Most pundits agree that this was a disappointing Games for Team USA. The haul of 23 medals is 5 fewer than in 2014 and 14 fewer than in 2010. The USOC had set a target of 37, with the expectation of at least 25 and the hope of up to 59. The decline in total medals is more severe when you consider that many of the medals came in US-friendly events that did not previously exist (11 from snowboarding and freestyle skiing). Team USA walked away with nine gold medals, the same number it has received in every Games since 2006 (2002 was only one higher).


However, despite the shortfall in total medals, Team USA did have some notable victories. First, the women’s hockey team. As someone who attended two schools where hockey is the major sport (Colgate and Cornell… go Colgate!), I really enjoy hockey. I watched the utter heartbreak of the US women’s 2014 loss, made extra difficult by the fact that in the Olympics there is no “get ’em next year”; it’s get them in 4 years… This year the gold medal game lived up to the hype, including the final outcome. Second, the improbable victory in curling, beating out both Canada and Sweden (another recent event predicted by The Simpsons [1]).

Here are some assorted notables:

  • Norway, Germany, and Canada each had a great Olympics.
  • For Germany, it is a return to form after a bit of a slide from 2002 to 2014.
  • For Canada, it demonstrates that 2010 was not just a fluke and that their rise to prominence is likely here to stay (unexpected losses in men’s and women’s hockey, and in curling, aside).
  • A historical note: it is crazy to think that in 1988 the sum total of golds for Canada, Norway, and the USA was 2 (4% of the total). Lately it’s been around 33% (including this Olympics).
  • Russia/OAR dropped down to 2 golds. Some of it was undeniably the ban, but their haul of 13 in Sochi was a bit of an anomaly; they took home 3 in Vancouver in 2010.
  • South Korea did not seem to receive much of a hosting bump. Their gold medal total was between their 2014 and 2010 count, and their total medal count has been steadily increasing since 2002.

Here is a graph of the total medal count over time:

And here is the gold medal count over time:

1. The other prediction being the Trump presidency, which they predict will be followed by a Lisa Simpson presidency. Interestingly, Ted Cruz just argued that Lisa is a Democrat.