Expected Wins

I am third in points scored, but sixth overall and in real danger of missing the playoffs. Digs SdLn TD UnBlvbl has had a historically productive season, but somehow has four losses and may not get a bye. Two things affect how total points turn into wins: who you play and how your points are distributed. For example, Digs would rather have used their nearly 200-point week either last week against NotGonnaLie or the week before against the Pack. It was a bit of overkill to use it against a team that barely broke 100.

Using pythagorean expectation, I calculate each team’s expected wins (E(W)), which captures how many wins we would expect them to have based on the points they scored and the points their opponents scored. The difference between a team’s actual wins and E(W) is the bonus (or penalty) that comes from the timing of when their points were scored relative to their opponents’ points (Time_Δ).

I also do the same calculation using the league-average points scored in place of the actual opponents’ points (E(W_1)). The difference between E(W) and E(W_1) captures the bonus or penalty from the set of opponents the team faced (Opp_Δ). Together, these two factors capture, in a sense, the team’s luck (Tot_Δ).
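
For the mechanically inclined, here is a minimal Python sketch of the calculation behind the table below. The exact pythagorean exponent is an assumption on my part (my playoff-probability calculations use 6), so the output is approximate rather than an exact reproduction of the table.

GAMES = 11           # weeks played so far
LEAGUE_AVG = 123.9   # league-wide average of the Avg Pnts column in the table below

def pythag_win_pct(points_for, points_against, k=6):
    """Share of games a team 'should' win, given average points for and against."""
    return points_for**k / (points_for**k + points_against**k)

def luck_breakdown(wins, avg_points, opp_avg_points):
    e_w = GAMES * pythag_win_pct(avg_points, opp_avg_points)   # vs. actual opponents
    e_w1 = GAMES * pythag_win_pct(avg_points, LEAGUE_AVG)      # vs. an average opponent
    return {
        "E(W)": round(e_w, 2),
        "E(W_1)": round(e_w1, 2),
        "Opp_d": round(e_w - e_w1, 2),    # schedule bonus or penalty
        "Time_d": round(wins - e_w, 2),   # timing bonus or penalty
        "Tot_d": round(wins - e_w1, 2),   # combined "luck"
    }

# Example: Digs SdLn TD UnBlvbl (7 wins, averages from the table)
print(luck_breakdown(wins=7, avg_points=152.49, opp_avg_points=128.39))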

Observations after the table. Note: The table is sortable.

Div | Rank | Team                 | Avg Pnts | Opp_Pnts | W  | E(W) | E(W_1) | Opp_Δ | Time_Δ | Tot_Δ
S   | 1    | Pack Attack          | 142.91   | 122.32   | 10 | 8.29 | 8.10   | 0.19  | 1.71   | 1.90
S   | 3    | Dryden Jets          | 129.26   | 126.54   | 7  | 5.92 | 6.33   | -0.41 | 1.08   | 0.67
S   | 4    | IncidentalPunishment | 119.37   | 119.43   | 7  | 5.49 | 4.77   | 0.72  | 1.51   | 2.23
S   | 9    | The Wrecking Broncs  | 117.43   | 127.21   | 4  | 3.96 | 4.45   | -0.49 | 0.04   | -0.45
S   | 10   | North by North Wentz | 114.59   | 124.76   | 3  | 3.87 | 3.99   | -0.13 | -0.87  | -0.99
S   | 11   | fourth and long      | 108.42   | 127.79   | 2  | 2.58 | 3.04   | -0.47 | -0.58  | -1.04
L   | 2    | Digs SdLn TD UnBlvbl | 152.49   | 128.39   | 7  | 8.53 | 8.99   | -0.46 | -1.53  | -1.99
L   | 5    | Theological Giants   | 118.62   | 115.79   | 7  | 5.98 | 4.65   | 1.33  | 1.02   | 2.35
L   | 6    | Hamilton Economists  | 130.53   | 121.46   | 6  | 6.89 | 6.52   | 0.37  | -0.89  | -0.52
L   | 7    | Not Gonna Lie        | 129.08   | 125.09   | 6  | 6.12 | 6.31   | -0.19 | -0.12  | -0.31
L   | 8    | Wood Street Wonders  | 115.10   | 119.46   | 6  | 4.77 | 4.08   | 0.69  | 1.23   | 1.92
L   | 12   | Go Blue              | 108.89   | 128.44   | 1  | 2.57 | 3.11   | -0.55 | -1.57  | -2.11

  • As unlucky as Digs has been, apparently Go Blue is unluckier! Looking at their schedule it’s true: they had a bunch of close losses. Still, these numbers are historically large.
  • The Theological Giants must please God, because they have 2.35 more wins than would be expected had they distributed their points in an average manner against average opponents. About 1.3 of those wins come from the league’s weakest schedule, and another full win comes from timing.
  • IncidentalPunishment and the Wonders are up there as well. Three of the five teams fighting for playoff spots have a major luck boost.
  • The Pack sits at an insane 10-1, which is just crazy if you remember that Yahoo! had predicted them going 0-13!! Oops! But winning that much takes at least some luck, and for them it seems to be timing based. They’ve had some nice, close wins (Broncos, Giants), and have generally upped their game when needed.

Fantasy Playoff Probability 2018 wk11

Only having three weeks left cuts down on the number of possible outcomes. However, I did enhance my predictions by adding uncertainty to the tie-breakers. Previously, whichever team was forecast to have the higher score was ranked higher, which meant the probabilities skewed toward teams with more points. This time, I tried to calculate the probability that, for example, the Giants close their 160-point gap over the next few weeks.
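
To give a flavor of that adjustment, here is a simplified Python sketch of the idea (not the exact model I ran): treat each team’s weekly score as roughly normal around its season average with an assumed spread, then compute the chance that the trailing team out-scores the leader by more than the current gap over the remaining weeks.

from math import sqrt
from statistics import NormalDist

WEEKLY_SD = 25.0  # assumed week-to-week standard deviation of a team's score

def prob_close_gap(gap, trailing_avg, leading_avg, weeks_left):
    """P(the trailing team outscores the leader by more than `gap` points)."""
    mean_diff = (trailing_avg - leading_avg) * weeks_left
    sd_diff = sqrt(2 * weeks_left) * WEEKLY_SD   # two independent scores per week
    return 1 - NormalDist(mean_diff, sd_diff).cdf(gap)

# Illustration: the Giants (averaging ~118.6) trying to close a 160-point
# tiebreaker gap on a rival averaging ~129 points, with three weeks left.
print(prob_close_gap(gap=160, trailing_avg=118.62, leading_avg=129.08, weeks_left=3))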

I also realized I had been calculating byes incorrectly. The byes go to the division winners, not the top two teams overall. This change decreased the Jets’ chances (they’d have to beat out the Pack to get a bye) and increased the chances for me and IncidentalPunishment.

Team (record)     | E(Wins) | E(Rank) | Playoffs (%) | Bye (%)
Pack (9-1)        | 10.45   | 1.25    | 99.99        | 87.04
Vikings (7-3)     | 8.68    | 2.84    | 98.50        | 73.82
Jets (7-3)        | 8.84    | 2.87    | 99.18        | 12.95
Economists (6-4)  | 7.70    | 4.53    | 87.17        | 17.32
IncdtlPun (6-4)   | 7.73    | 5.21    | 77.91        | 8.85
NotGonnaLie (5-5) | 6.74    | 6.29    | 52.24        | 0.00
Giants (6-4)      | 6.95    | 6.38    | 46.43        | 0.02
Wonders (5-5)     | 6.82    | 6.76    | 38.45        | 0.00
Broncos (3-7)     | 4.43    | 9.67    | 0.09         | 0.00
Fourth (2-8)      | 3.56    | 10.29   | 0.00         | 0.00
Wentz (3-7)       | 3.99    | 10.30   | 0.04         | 0.00
Blue (1-9)        | 2.11    | 11.62   | 0.00         | 0.00

Even though I won, my playoff chances only increased by about 2%. Some of this was because I was expected to beat the Blues, but some was due to the change in methodology (because I’m ahead in points, the naive tie-breaking gives me a 92% chance of making the playoffs).

Top Tier 2018 w11

Middle Crew

Bottom Rung

Teaming Data → RootNPI

The DocGraph / CareSet team does great work and I have personally benefited from the availability of their original CMS teaming data, even using it in a chapter from my dissertation.

They recently updated their methodology and created a new group of datasets they call “Root NPI”. Along with this update, they will no longer be updating the original-format teaming data. While I understand the need for this change, the fact that they have neither updated the original data nor retroactively created the new RootNPI data (beyond 2014) is a problem for me, as I use the time variation in these datasets and would like to be able to add years.

To get around this limitation I created a method that, to a fairly close approximation, creates the new data sets from the old, and therefore allows me to perform analyses on data from 2009 to 2015. The idea is to take the 180-day files and make them symmetrical. My commented SAS code is here, but the main commands are:


/* Duplicate the teaming data, switching NPI_Number1 and NPI_Number2 */
DATA phy_ref_2014_180_x2;
SET ref_med.phy_ref_2014_180(rename=(NPI_Number2=NPI_A NPI_Number1=NPI_B))
   ref_med.phy_ref_2014_180(rename=(NPI_Number1=NPI_A NPI_Number2=NPI_B));
run;

/* For each ordered NPI pair, keep the larger of the two directional patient counts */
proc sql;
CREATE TABLE npiroot_2014_180
as
SELECT NPI_A, NPI_B
   , MAX(Bene_Count) as patient_count
FROM phy_ref_2014_180_x2
GROUP BY NPI_A, NPI_B
; quit;
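
For those who would rather not run SAS, here is a rough pandas equivalent of the two steps above. The file name and exact column names are assumptions; adjust them to whatever is in your copy of the 180-day teaming file.

import pandas as pd

# Load the directional 180-day teaming file (file and column names assumed)
ref = pd.read_csv("phy_ref_2014_180.csv",
                  usecols=["NPI_Number1", "NPI_Number2", "Bene_Count"])

# Duplicate the teaming data, switching NPI_Number1 and NPI_Number2
fwd = ref.rename(columns={"NPI_Number1": "NPI_A", "NPI_Number2": "NPI_B"})
rev = ref.rename(columns={"NPI_Number2": "NPI_A", "NPI_Number1": "NPI_B"})
both = pd.concat([fwd, rev], ignore_index=True)

# For each ordered pair, keep the larger of the two directional patient counts
npiroot = (both.groupby(["NPI_A", "NPI_B"], as_index=False)["Bene_Count"]
               .max()
               .rename(columns={"Bene_Count": "patient_count"}))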

Complete data exist from both data sets for 2014, which allows me to compare the effectiveness of my transformation method. Here are some statistics from the comparison:

  • For the pairs that match, there is a very high correlation between the two (0.97938, see scatter below)
  • While 37.5% of the pairs do not match (50m of 185m), these pairs only account for ~10% of the total number of shared patient connections (~800m of ~7.5B)
  • It looks like most of the missing connections happen right near the 11-patient cutoff.
    • In fact, for 30% of the missing pairs, the pair that is present has a patient count of 11
    • It is 19% for 12, 13% for 13, and 9% for 14
    • 90% have fewer than 22, 95% fewer than 33, and more than 99% fewer than 100
  • There seems to be a decently large, non-random set of providers that are in the new data, but not in the old. They all seem to be medical device related. Here are the top 10: Arriva Medical, Degc Enterprises, Lincare, All American Medical Supplies, United States Medical Supply, Med-Care Diabetic & Medical Supplies, Ocean Home Health Supply, Binson’s Hospital Supplies, Passaic Healthcare Services, DJO
  • There is not a similar pattern for providers that are in the old data but not in the new. For consistency across years, I will probably exclude the above set of providers from my analyses.
  • Here is a comparison of the two datasets in terms of the number of patients (the strength of the connection), RootNPI vs constructed:
    • Median: 21 vs 21
    • Mean: 41.6 vs 43.4
    • 95th percentile: 128 vs 134
    • 99th percentile: 347 vs 378
    • Standard deviation: 104.5 vs 116.6
  • It may seem odd that the constructed data set has a larger average than the new Root NPI data, since the new data set uses the full year to define a connection while the old one used a 180-day window. I think what accounts for the discrepancy is that the old data set included connections that happen up to 6 months after Dec 2014, which RootNPI omits.

Finally, to ensure that there are not any odd systematic variations between the two measures, I created a scatter plot comparing the patient count I calculated from the original teaming data with the new RootNPI patient count. I truncated the plot at 1,000, both because there are just too many observations below 1,000 and because I am mainly interested in what happens to the relationship as both counts get large.
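
Here is roughly how a plot like that can be put together (a sketch only; the file and column names are placeholders, with the constructed file being the output of the transformation above).

import pandas as pd
import matplotlib.pyplot as plt

constructed = pd.read_csv("constructed_2014.csv")  # NPI_A, NPI_B, patient_count
rootnpi = pd.read_csv("rootnpi_2014.csv")          # NPI_A, NPI_B, patient_count

merged = constructed.merge(rootnpi, on=["NPI_A", "NPI_B"],
                           suffixes=("_constructed", "_root"))

# Truncate at 1,000 patients on both axes
subset = merged[(merged["patient_count_constructed"] < 1000) &
                (merged["patient_count_root"] < 1000)]

plt.scatter(subset["patient_count_root"], subset["patient_count_constructed"],
            s=2, alpha=0.1)
plt.xlabel("RootNPI patient count")
plt.ylabel("Constructed (from teaming data) patient count")
plt.title("RootNPI vs constructed patient counts (truncated at 1,000)")
plt.show()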

To me, this looks really reassuring. The two measures seem to be similar, with some noise, and this noise appears to get smaller as the number of patients gets larger.

Mendeley – Fixing Author and Journal Metadata

I use Mendeley to store, organize, and manage my library of academic papers. Its tagging and search features are excellent, and Mendeley helps keep the process of finding previously read literature manageable. At times I hear or see an author’s name and wonder which of their papers I have read. In Mendeley, you can select and view all of the papers by a single author quite easily. However, there is an issue with variations in names: when you filter by author, Mendeley has no way of knowing that “Mark Pauly”, “Mark V Pauly”, and “MV Pauly” are all the same person. Fixing this through the interface would be a painful, manual task involving selecting and editing each individual paper.

Fortunately, Mendeley stores the local library information in a relatively easy-to-access SQLite database, and I know SQL. What I did, and will show, was to find probable duplicate names and merge them together using SQL.

Step 0: BACK UP YOUR DATABASE!

The main databases are stored in the local AppData folder (“C:\Users\[Your_Windows_UserName]\AppData\Local\”) under “Mendeley Ltd\Mendeley Desktop”. What you are looking for is a file similar to “dl679@cornell.edu@www.mendeley.com.sqlite”. BACK THIS FILE UP BEFORE PROCEEDING.

Step 1: Get and install a program to read and write SQLite

There are a variety of tools out there, but I used, and can recommend, SQLiteStudio (https://sqlitestudio.pl/index.rvt).

Step 2: Connect to the database.

Open up SQLiteStudio and add the database you located in Step 0, then connect to it. Once you have it open, you should be able to see the tables that Mendeley uses. The main one is “Documents”; this table has one entry for each of the articles in your library. The authors are stored in “DocumentContributors”.

Step 3: Find suspected duplicates

The following code generates a good list of authors that are probably duplicate entries.


SELECT DocumentContributors.lastName, DocumentContributors.firstNames, count(DISTINCT Documents.id) AS Num_Papers, max(Num_Papers_LN) AS Num_Papers_LN
FROM DocumentContributors
INNER JOIN Documents
   ON DocumentContributors.documentId = Documents.id
INNER JOIN (
   SELECT lastName, count(DISTINCT Documents.id) AS Num_Papers_LN
   FROM DocumentContributors
   INNER JOIN Documents
      ON DocumentContributors.documentId = Documents.id
   GROUP BY lastName
   ) AS ln
   ON ln.lastName = DocumentContributors.lastName
GROUP BY DocumentContributors.firstNames, DocumentContributors.lastName
HAVING Num_Papers_LN > 8
ORDER BY Num_Papers_LN DESC, DocumentContributors.lastName, Num_Papers DESC;

I limited my list to last names with more than 8 papers, as I did not want to spend too much time cleaning up less-frequent authors.

Step 4: Identify and update duplicates

Once I have my list, I check whether there is more than one author with a given last name. The idea is to create a search string that uniquely identifies an author, then standardize the first name. For example, I searched for:


SELECT *
FROM DocumentContributors
WHERE lastName = 'Town'
   AND firstNames LIKE '%';

In my database, all of the authors with the last name “Town” were in fact “Robert J Town”, so I could safely run the following update statement to standardize the first name. I chose to omit periods and use the first initial of the middle name, but your standardization procedure may differ:


UPDATE DocumentContributors
SET firstNames = 'Robert J'
WHERE lastName = 'Town'
   AND firstNames LIKE '%';

If there are multiple authors with the same last name, you can include a first initial before the % (e.g., firstNames LIKE 'R%') to filter on that. Always check what will be impacted by your query before running an update statement.

Step 5: Make Mendeley update the search index

Finally, to get Mendeley to rebuild the search index (including the index of authors): with Mendeley closed, delete the files in “…\AppData\Local\Mendeley Ltd\Mendeley Desktop\www.mendeley.com\dl679@cornell.edu-3ab2” (your final subfolder will differ). BACK UP THESE FILES FIRST. Deleting these files (and having Mendeley rebuild them) could very well speed up search if it has become slow.

One caveat: this approach will not update the information in your library on Mendeley.com and will not sync to other computers. I think that could be accomplished by updating the “eventLog” and “eventAttributes” tables, but I didn’t have time to write a sufficiently automated process; something could probably be done fairly easily in Python.

Step 6: See which authors you read the most


SELECT DocumentContributors.lastName, DocumentContributors.firstNames, count(DISTINCT Documents.id) AS Num_Papers
FROM DocumentContributors
INNER JOIN Documents
   ON DocumentContributors.documentId = Documents.id
GROUP BY DocumentContributors.firstNames, DocumentContributors.lastName
HAVING Num_Papers > 5
ORDER BY Num_Papers DESC;

A side benefit of this project is that I can see which authors are particularly important to my research by running the above code. Not surprisingly, my advisor’s advisor, Marty Gaynor, tops the list (along with the very prolific Larry Casalino). Amitabh Chandra, David Dranove, Bruce Landon and Robert Town are next up (can you tell I research health economics!?).

I also used a similar process to clean up the journal names, and then checked which journals I read most often. Health Affairs topped the list, followed by NBER Working Papers. There is a big gap before the next group: Journal of Health Economics, Health Services Research, Journal of Economic Perspectives, and American Economic Review.
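
For reference, here is a sketch of what the analogous journal clean-up might look like, this time driven from Python. I am assuming the journal name lives in the Documents table’s publication column, and the database path is a placeholder; verify both against your own (backed-up!) database before running any UPDATE.

import sqlite3

# Placeholder path and assumed column name (publication) -- check your own database first
db_path = r"C:\Users\YOU\AppData\Local\Mendeley Ltd\Mendeley Desktop\your_library.sqlite"
con = sqlite3.connect(db_path)

# Which journal strings appear most often?
for publication, n in con.execute(
        """SELECT publication, COUNT(*) AS n
           FROM Documents
           GROUP BY publication
           ORDER BY n DESC
           LIMIT 20"""):
    print(n, publication)

# Standardize a name once you have confirmed the variants are the same journal
con.execute("UPDATE Documents SET publication = 'Health Affairs' "
            "WHERE publication LIKE 'Health affairs%'")
con.commit()
con.close()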

Olympics 2018: Medals Recap

Some thoughts on the results of the 2018 Olympics:

Most pundits agree that this was a disappointing Olympics for Team USA. The haul of 23 medals is 5 fewer than in 2014 and 14 fewer than in 2010. The USOC had set a target of 37, with an expectation of at least 25 and hopes of up to 59. The decline in total medals is more severe when you consider that many of the medals came in US-friendly events that previously did not exist (11 from the snowy pursuits of snowboarding and freestyle skiing). The USOC walked away with 9 gold medals, the same number it has received in every games since 2006; 2002 was only one higher.


However, despite the shortfall in total medals, Team USA did have some notable victories. First, the women’s hockey team. As someone who attended two schools where hockey is the major sport (Colgate and Cornell… go Colgate!), I really enjoy hockey. I watched the utter heartbreak of the US women’s 2014 loss, made extra difficult by the fact that in the Olympics there is no “get ’em next year”. It’s get them in 4 years… This year the gold medal game lived up to the hype, including the final outcome. Second: the improbable victory in curling, beating out both Canada and Sweden (another recent event predicted by The Simpsons [1]).

Here are some assorted notables:

  • Norway, Germany, and Canada had great Olympics.
  • For Germany, it is a return to form after a bit of a slide from 2002 to 2014.
  • For Canada, it demonstrates that 2010 was not just a fluke and that their rise to prominence is likely here to stay (unexpected losses in both men’s and women’s hockey, and in curling, aside).
  • A historical note: it is crazy to think that in 1988 the combined gold total for Canada, Norway, and the USA was 2 (4% of the total). Lately it has been around 33% (including this Olympics).
  • Russia/OAR dropped down to 2 golds. Some of that was undeniably the ban, but their haul of 13 in Sochi was a bit of an anomaly; they took home 3 in Vancouver in 2010.
  • South Korea did not seem to receive much of a hosting bump. Their gold medal total was between their 2014 and 2010 counts, and their total medal count has been steadily increasing since 2002.

Here is a graph of the total medal count over time:

And here is the gold medal count over time:

1. The other prediction being the Trump presidency, which they predict will be followed by Lisa Simpson. Interestingly, Ted Cruz just argued that Lisa is a Democrat.

Fantasy Playoff Probability

I crunched the numbers for my fantasy football league (methodology details below) and here are the results:

Team (current record) | E(Wins) | E(Rank) | Playoffs | 1st Round Bye
Cowboys (8-2)         | 10.19   | 1.27    | 100.0%   | 93.4%
Jets (7-3)            | 9.00    | 2.82    | 98.7%    | 61.3%
Vikings (6-4)         | 8.12    | 3.21    | 97.9%    | 6.1%
Pack (6-4)            | 7.78    | 4.34    | 89.4%    | 30.6%
Fourth (6-4)          | 7.39    | 5.51    | 74.9%    | 8.1%
Economists (5-5)      | 6.77    | 6.09    | 59.5%    | 0.1%
Eiferts (5-5)         | 6.61    | 6.08    | 59.0%    | 0.4%
Broncos (6-4)         | 6.59    | 7.41    | 17.2%    | 0.0%
NotGonnaLie (4-6)     | 4.73    | 9.64    | 3.2%     | 0.0%
Giants (3-7)          | 3.58    | 10.60   | 0.2%     | 0.0%
Wonders (2-8)         | 3.82    | 9.22    | 0.0%     | 0.0%
Blue (2-8)            | 2.54    | 11.80   | 0.0%     | 0.0%

The probability distribution of each team’s rank (grouped by tier) follows.

Next week the Economists will play the Eiferts in what will almost be a playoff game: the winner has around a 90% probability of making the playoffs, while the loser has around a 30% probability.

Even though the Broncos have a one-game lead on both the Economists (me) and the Eiferts, the algorithm is pretty bearish on the Broncos’ chances. Part of that is that they are 20 points behind me (and 120 behind the Eiferts) in the tiebreaker, but part of it is their more difficult schedule.

The Cowboys are definitely in. The Jets and Vikings are almost surely in. Blue is almost surely out. The real battle is for those last two spots (5 and 6).

Top Tier 2017 11 16

Middle Crew 2017 11 16

Bottom Rung 2017 11 16

P.S. Methodology may follow in an update. Short version: I used a version of Pythagorean expected wins to compute win probabilities (with an exponent of 6), which seems to be similar to what Yahoo uses in its projections. I had to “guess” at some lineups since some teams have players on byes in future weeks. Then I computed the probabilities for each of the 262,144 possible win/loss outcomes (2^6 = 64 outcomes per week, or 64^3 = 262,144 over three weeks). I also had to make assumptions about points scored for the tiebreakers (I gave the winner the greater of the two expectations).
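
For the curious, here is a minimal Python sketch of the enumeration step. The matchups and averages below are made up for illustration, and the real version also tracks projected points for the tiebreakers.

from itertools import product
from collections import defaultdict

def pythag_p(points_a, points_b, k=6):
    """Pythagorean probability that team A beats team B."""
    return points_a**k / (points_a**k + points_b**k)

# Each remaining game: (team_a, team_b, probability that team_a wins)
games = [
    ("Economists", "Eiferts", pythag_p(125, 120)),
    ("Broncos",    "Jets",    pythag_p(115, 130)),
    ("Cowboys",    "Blue",    pythag_p(140, 105)),
]

expected_wins = defaultdict(float)
for outcome in product([0, 1], repeat=len(games)):       # 2^n scenarios
    prob = 1.0
    wins = defaultdict(int)
    for (team_a, team_b, p_a), a_wins in zip(games, outcome):
        prob *= p_a if a_wins else (1 - p_a)
        wins[team_a if a_wins else team_b] += 1
    for team, w in wins.items():
        expected_wins[team] += prob * w                  # probability-weighted wins

print(dict(expected_wins))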

Olympics 2016 Rio

It’s become a bit of a biennial tradition for me to write about the Olympics – specifically the distribution of medals between countries.

I don’t have any sport-by-sport insight into how these games will look, but judging from the last couple of Olympics this should be another showdown between the US and China for Olympic supremacy. In three of the last four Olympics the United States has edged out China, with China’s lone victory coming in a big way when they hosted in 2008.

Other medal count story lines to watch:

  • Can Britain build on their impressive growth, or was 2012 only about the home country bounce?
  • How will Russia do? Historically they were a powerhouse, but the breakup of the Soviet Union and the struggles of their economy led to a prolonged slump. Their economy has improved and they have seemed determined to reassert themselves on the world stage. Will that show up in their medal count? As a result of a huge doping scandal, some of their athletes were banned. How much of a factor will that be?
  • How will Brazil do? There’s been a lot of press coverage about how Rio wasn’t ready to host, but is Brazil ready to compete?
  • Will Japan see a spike as they prepare to host in 2020?

Winter Olympics Medals Over Time – Post Olympics Update

The Winter Olympics have now closed. The host nation Russia walked away with both the most medals and the most golds. The increase was pretty dramatic: Russia only managed to pick up 3 gold medals in Vancouver but nabbed 13 in Sochi. Canada, while lacking the home-country bounce they had in 2010, continued to be a top competitor. Norway had another good games, reinforcing the fact that their meager haul in Turin 2006 was just an aberration.

A lot of pundits have called this year’s games a huge disappointment for the USA (ESPN: Team USA disappoints in Sochi). However, by historical standards the 2014 games were pretty standard. The USA pulled in the same number of gold medals as in 2010 and 2006, and only one fewer than our all-time high in 2002 (which we hosted). In both the Summer and Winter Olympics, consistency seems to be the name of the game for the United States, as I pointed out in my first Summer Olympics post. Though, as I mentioned before, the number of events and medals has been increasing, so in terms of the percentage of gold medals the United States is slipping.

Winter Olympic Gold Medal Counts 1988-2

The total medal counts show pretty much the same story. The United States failed to defend their total medal lead – however, hopefully this time it won’t take us 78 years to get back on top! As with the Summer Olympics, it appears that the home-country bounce is more pronounced in the count of gold medals than in the count of total medals. To me this is counter-intuitive and is begging for some good statistical analysis to investigate potential systematic judging bias in favor of the home country. Perhaps if I find myself with some extra time (highly unlikely) I can look into that.

Winter Olympic Total Medal Counts 1988-2

Unlike host country Russia, Germany continued to struggle to regain their past Winter Olympics glory. They took home their fewest gold medals since 1972 (combining East and West) and fewest total medals since 1968! (See the German/Russian dominance in my original Winter Olympics post.)

Probably the most surprising country was the Netherlands. Because historically they were not a big player, I did not even include them in my charts. At one point they led the total medal count, and they finished with 24 medals and 8 golds. Previously the Netherlands’ highest haul had been 11 medals and 5 golds (1998), and in 2010 they took home a total of 8 medals.

Winter Olympics Medals Over Time

My Summer Olympics post two years ago was fairly popular, so now that I have a brief respite from graduate school deadlines I put together a couple of charts showing the Winter Olympic Medals over time.

Winter Olympic Gold Medal Counts 1988

There are significantly fewer events, and thus medals, in the Winter Olympics than in the Summer Games – currently 98 versus 302. That makes the gold medal counts a bit more noisy (small sample size), so I’m also including a chart of total medal counts (unweighted).

Winter Olympic Total Medal Counts 1988

Both charts show Canada and the United States rising. One thing that these charts are missing is the fact that the total number of events has been climbing significantly as well – from 46 in 1988 to 86 in 2010, and this year’s games feature 98 events.

Number of Winter Events 2

Another fact that is not captured by the narrow window of years featured above is how dominant Germany and the Soviet Union were beginning in the early 1970s. From 1972 to 1998 they combined to take home 40.5% of the gold medals. The last time the United States led the gold medal count in the Winter Olympics was 1952 – which makes the lead of this story (The United States has a case of Olympics medal envy) seem a bit odd and ill-informed. The 2010 Winter games were the first time the United States had led the total medal count since 1932 – a games hosted by the United States that featured only 10 other countries. Even then, we only beat out Norway by 2 medals.

Winter Olympic Gold Medal Counts 1952

So congratulations to Canada on becoming a Winter Olympics powerhouse – because let’s be honest, you don’t have all that much else going for you. But that success and growth has come alongside the United States and not at her expense.

Is Congress About to Accidentally End the Shutdown?

The House just unanimously passed a bill allowing federal workers to get back pay. The Senate and President are in support, so this should soon be law. The measure was not controversial: even Congress realizes that it, not the federal workers, is the reason these people are not working; Congress, not the workers, is the problem. And while the workers will end up getting paid without working (government efficiency?), the position that they should miss car payments and house payments and be unable to cover other crucial expenses while Congress figures things out is a hard one to defend.

However, in doing so, they may have unwittingly removed the legal justification for the shutdown itself. Modern government shutdowns date back to the Carter administration’s interpretation of the, at the time, obscure Antideficiency Act (for an overview of the history see here or here). In short, the government cannot enter into a contract without funding – so workers cannot work with the expectation that Congress will fund them (even though they probably will). Now, with the passage of a bill guaranteeing back pay (here), workers would no longer be working on a hope that Congress might fund them, or an expectation that it will, but on a legislative guarantee.

A lawyer may need to take a closer look, but to this layman’s eyes it is time for federal workers to go back to work, and they don’t need a dysfunctional Congress to agree first. Perhaps in this case two wrongs of Congressional dysfunction have made a right?