Welcome! The goal of this blog is to share my analysis of the free, publicly available, user-reported law school applicant data from Law School Numbers. Using data from Law School Numbers is problematic for a variety of reasons (users misreporting their actual information, users creating fake accounts, selection bias, etc.), and if I had access to it, I'd much rather work with the data that schools themselves have on applicants. We have what we have, though. Also, while I do have some facility with the type of statistical analysis I employ in my blog posts, I am far from being a professional statistician. I am doing this solely to provide my analysis to interested readers, get feedback, and generate discussion. What I am not doing is prescribing courses of action for law school applicants, or pretending to know what actually goes on behind closed doors in law school admission committees' meetings. I am, however, interested in looking at the story the numbers seem to tell, and sharing it with people with similar interests. I think I'll be able to provide a lot of interesting, and perhaps even helpful, analysis here, but at the end of the day, it is up to the individual law school applicant to put together applications and application strategies tailored to his or her own hopes and goals.

Monday, November 4, 2013

Lack of updates

Hey all,

I have started school myself, so time is at a premium.  Consequently, I haven't really done much with this site for a while, but I occasionally get e-mails from people who have found it helpful, so it's good to know people are checking it out.

If anyone has any specific questions or anything, I'd be happy to do my best to answer them, and when some time frees up for me, maybe do a little more here.  Until then, good luck with your application cycles!

Wednesday, August 14, 2013

Stop the Presses!

A recent thread on TLS reminded me of something I have been planning to do all along and, honestly, should have just done from the start: separate analyses for data from before and after applications started dropping off precipitously in the 2010-2011 cycle.  The short story is that Anne Levine, the chief consultant at Law School Expert, a law school admissions consulting firm, said at some point in the past that applying early in the cycle was important, but has recently said that it is not.  Ms. Levine's own blog post on whether or not she "contradicts" herself is here.

The first thing I want to point out is that my analysis supports exactly what Ms. Levine states in the first point of her blog post: applying early is the correct answer ASSUMING your numbers wouldn't be any better if you applied later.  If, however, you suspect you could raise your LSAT by even a point or two by retaking in December, you absolutely should retake and apply later, as the advantage associated with even a couple of extra LSAT points almost always outweighs whatever advantage would be gained by applying earlier.  I actually made this point in my own previous blog post about the advantage of applying earlier.

She goes on to further state that
The importance of applying early was definitely diminished in the Fall 2013 cycle, as evidenced by the fact that I had someone with a 3.4 and 163 who applied to a Top 20 law school in June and got in off the waiting list. That never would’ve happened in 2010!
This, of course, is not much proof at all that the importance of applying early was definitely diminished in the Fall 2013 cycle, as it is a single anecdote.  So, of course, I wanted to look into it a little further myself.

From the moment the idea of starting this blog occurred to me, I have been planning on comparing how things looked before applications started falling off with how things have looked since.  Why I didn't just do this from the outset, I'm not sure.  My major goal with this site is for it to be a useful resource for people currently in the process of - or thinking about - applying to law school.  Obviously, if things did change with the decrease in applicants, the more recent data is going to be a lot more relevant.  The recent thread on TLS and Anne Levine's response kind of shook me awake on this one, I guess.

So...first things first.  Below, I am posting the "boosts" associated with each earlier month an application is submitted, just like in my last post, except this time I am posting them for the T14 for the 2003/04 through 2009/10 cycles on the left, and for the 2010/11 through 2012/13 cycles on the right, so we can compare.  And what do we find?  That Anne Levine is largely right, at least as concerns the T14:

In all schools except Chicago and Penn, the percent increase in an application's likelihood of admission for each earlier month applied decreases, and in most cases significantly, in the post-2010 period as compared to the pre-2010 period.  In the cases of Stanford, Columbia, NYU, Duke, and Cornell, it disappears entirely (and, of course, it was never there for Yale anyway).  This lines up quite nicely with what Ms. Levine claims.  The real surprises, then, are Penn and Chicago, where the boosts have not only persisted but have increased substantially since applications started falling.  Thoughts?

Over the next few days (when I'm not working at the job that actually pays me), I'm going to break this down further and see how the "boost" has evolved over time, cycle-by-cycle for the T14.  Until then!

Tuesday, August 13, 2013

By request, a chart!

A reader asked that I post at least one chart for a school demonstrating an advantage to earlier applications.  This is actually a little trickier than it might seem.  At the person's suggestion, I decided to give this a try with the U of Chicago, since it appears to give the biggest boost in the Top 14 to earlier applicants.

The reader suggested that I plot the percentage of applicants accepted by the month the application was submitted (at least, that was how I understood the request).  This is a little problematic, though, because the applicant pools for each month are likely not equal.  Some people have theorized that applicants with stronger numbers tend to be the types of people who also get their applications in earlier.  If that's true, we'd expect earlier applicants to have more success simply because they are stronger applicants overall, regardless of when they submitted, and so higher acceptance rates for earlier applicants wouldn't tell us much.

To be honest, this isn't a super easy problem to work around, but I gave it a shot.  What I have done is take the LSAT:GPA ratio suggested for Chicago by Law School Predictor, which is 3.04 to 1 (I'd much rather use an actual published index to do this, but Chicago does not provide one), and develop an "index" score for each applicant based on that ratio.  In other words, it's just a figure that indicates the numbers-strength of the applicant.  I then determined the average index number for all applicants, and calculated the percentage of above-average applicants who were accepted, by month of application submission.  This way, we at least partially control for differences in the strength of each month's applicant pool: instead of looking at the percentage of all applicants who were accepted, we're looking at the percentage of "strong" candidates who were accepted.
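For anyone curious how this works mechanically, here is a rough sketch in Python. The 3.04:1 figure is the Law School Predictor ratio mentioned above; exactly how it gets turned into a score is my own guess (only the relative weighting matters for ranking applicants against the pool average), and the applicant rows are fabricated for illustration:

```python
from statistics import mean

RATIO = 3.04  # Law School Predictor's suggested LSAT:GPA weighting for Chicago

def index_score(lsat, gpa):
    # Weight each LSAT point 3.04x as heavily as each GPA point (an assumption
    # about units; the post does not specify the exact scaling).
    return RATIO * lsat + gpa

# (month, LSAT, GPA, accepted?) -- made-up applicants for illustration
apps = [
    ("Sep", 172, 3.8, True), ("Sep", 165, 3.5, False),
    ("Oct", 171, 3.9, True), ("Oct", 170, 3.6, True),
    ("Nov", 173, 3.7, False), ("Dec", 172, 3.8, False),
]

# Average index across the whole pool, then acceptance rate among
# above-average ("strong") applicants, broken down by month.
avg = mean(index_score(l, g) for _, l, g, _ in apps)

strong_accept_rate = {}
for month in ("Sep", "Oct", "Nov", "Dec"):
    strong = [acc for m, l, g, acc in apps if m == month and index_score(l, g) > avg]
    if strong:
        strong_accept_rate[month] = sum(strong) / len(strong)
```

With real LSN data you'd feed in the full applicant list for the school instead of these six rows.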

So, that pretty much does it for explanation.  Below is the chart:

As you can see, there is a definite downward trend in the percentage of "strong" applicants who were accepted as you go deeper into the cycle, and the drop off from November to December is especially steep.  In terms of the number of applicants, there is an upward trend from September to November (in which applications peak), and then a steady decline, with a very sharp drop in February.

Although the graphic above does not account for how far accepted applicants were from the mean, I think it visually demonstrates a definite advantage to applying earlier (and while the chart itself doesn't control for anything beyond LSAT and GPA, the regression analysis on which I based my original "early application" boost post does).

So, there you have it, in pictures.  Well, a picture.  If anyone is interested in seeing how other schools look, let me know.

Monday, August 12, 2013

Applying Earlier at the T14: Does it Matter, and How Much?

Yesterday I posted regarding binding Early Decision applications to the Top 14 law schools, to see if there is any truth to the notion that applying ED to a school can boost an applicant's chances of acceptance.  Today, we'll look at the effect (if any) that applying earlier in the cycle has on those chances.

First, a little explanation.  In organizing the data, I broke applications down by the month in which they were submitted.  The results you will find in the tables below correspond to the increase in one's chances of acceptance for each earlier month in which the application is submitted.  In other words, there is an X% increase in your chances of acceptance if you apply in September rather than October, or October rather than November, etc.  This creates a bit of a problem, because technically a September 30th application is coded as a September app, and an October 1st application is coded as an October app.  It would probably be foolish to think that any of the % increases you see in the charts would apply to an application sent out just one day earlier, as in that example.  Still, the charts are instructive for giving a general idea of how schools treat earlier applications.  I will break the data down into four charts: the entire applicant pool, the non-splitters, the splitters, and the reverse-splitters.  The percentages in each chart correspond to the increased likelihood of admission for an applicant submitting one month earlier.
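For the curious, here is roughly how a "% increase per earlier month" number falls out of a logistic regression: the coefficient on the month variable is exponentiated into an odds ratio. The coefficient below is made up, not a value from my models:

```python
import math

# Hypothetical logistic-regression coefficient on "months earlier submitted".
beta_month = 0.18

odds_ratio = math.exp(beta_month)   # odds multiplier per earlier month
pct_boost = (odds_ratio - 1) * 100  # the "X%" figure reported in the tables

# Boosts compound multiplicatively: applying two months earlier multiplies
# the odds by exp(2 * beta_month), not by (1 + 2 * pct_boost / 100).
two_months = math.exp(2 * beta_month)
```

So a coefficient of 0.18 would show up in a table like these as roughly a 20% boost per earlier month.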

Entire applicant pool:

As you can see, every school except Yale demonstrates at least some boost for applicants submitting earlier.  So, when Yale tells you it really doesn't matter if you apply in September or February, it appears that they mean it.  Even the lowest school on the list, NYU, provides almost a 20% increase in an applicant's chances of admission for submitting in September as opposed to October, October as opposed to November, etc.


Not a whole lot different from the first chart, with the exception that for some reason I have decided to call UVA "Virginia" in this chart.  I was simply too lazy to go back and fix it.


Here things get a little interesting.  Almost all the boosts increase (at least where they still exist), and the Penn, Duke, and Harvard boosts increase substantially.  Other schools, however, completely drop off the list, indicating no statistically significant advantage to splitters applying earlier at these schools.


And, as usual, reverse-splitters kind of get shafted.  With the exception of Penn, which gives an enormous boost to reverse-splitters who apply earlier than their later-applying counterparts, there isn't a single reverse-splitter early-application boost to be found.  NYU came very close to statistical significance on this one, but didn't quite make it, and rules is rules.

So, there you have it.  If you ever needed some hard evidence to tell you to get it in gear and get that application out, there it is.  This comes with some obvious caveats, though!  For one thing, don't send in a subpar application, thinking that it's more important to get it in early.  Because I can't measure things like letters of recommendation, resumes, or essays, I definitely don't recommend sending in rushed or poorly thought out examples of any of these simply in order to turn the application in earlier.  So, let me rephrase what I said before: get it in gear and get that super-polished application out.

Also, one of the most common questions I hear comes from applicants who are trying to decide whether to apply as early as possible, or re-take the LSAT.  Although there is a boost to applying earlier for most students at most schools (sometimes a substantial one), any "lost" early-application boost is more than compensated for by an extra LSAT point or two, so as long as you think you can increase your score by a few points, retaking is often the right answer to this question.

Sunday, August 11, 2013

To ED or not to ED to the Top 14?

Application season is almost upon us, and many anxiety-ridden applicants will be considering whether or not to apply via binding Early Decision application to their dream schools.  There is conventional wisdom out there that says applying via binding Early Decision signals to a school that you are very committed to attending, and therefore boosts your chances of acceptance.  This makes the option very attractive for applicants who are dead set on attending a certain school, or who would very much like to attend but are unsure that their numbers are high enough to get in.  Another piece of conventional wisdom says that by accepting you as an ED applicant, a school essentially has you cornered and therefore has little reason to offer you financial incentives (i.e., scholarship funds) to entice you to attend.  Because the data I have on scholarship awards is very sketchy in its current state, I can't really address this second point, but it is certainly worth noting and makes all kinds of common sense.

The first point, though, I think can be examined by using the same regression analysis I have used thus far.  I will focus only on my second model, in which I only consider applicants who were either accepted or rejected by schools (in other words, waitlisted candidates whose fates are unknown are left out of the analysis).  I will deal only with the Top 14 schools in this post, because most of them offer a binding ED application option, and because outside the T14, there isn't a whole lot of binding ED to be found.

With all that said, let's take a look at the numbers.  In the first chart, I'm posting the results for the ENTIRE pool of applicants who were either accepted or rejected at these schools. As always, these are the results after controlling for LSAT score, undergraduate GPA, URM status, timing of the application, nontraditional status, and gender.  Our variable of interest here is simply the increase in the likelihood of acceptance for an applicant who applies ED as opposed to RD.

Notes here:
- The first number is the number of observations the regression is based on, and the second is how many times as likely ED candidates are to be accepted as identical RD candidates (in other words, Michigan ED candidates are 1.166 times as likely to be admitted as identical RD candidates, and Duke ED candidates are 4.222 times as likely).
- Georgetown actually seems to disadvantage ED applicants, who are 31% less likely to be admitted than identical RD applicants.  I have no idea why this might be.  We see this same result at George Washington, but because all ED-accepted applicants to GW are given substantial scholarships, the result there makes perfect sense.  For Georgetown, not so much.  I'm definitely interested in what hypotheses you all might have about this.
- I have an asterisk beside Northwestern because of Northwestern's recently instituted policy of giving full rides to accepted ED applicants.  These regressions are based on data going all the way back to the 2003/2004 cycle.  One would expect Northwestern's number to be similar to GW's (which isn't listed here), but in the first year Northwestern implemented the program, its ED boost was actually enormous.
- N/A is for schools that don't offer binding ED, and NSS means "not statistically significant" and indicates that a school had no boost associated with ED applications in my analysis.
- The University of Virginia has the biggest ED boost, which is no surprise, and confirms the conventional wisdom that is "ED UVA!"
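One note on interpreting these multipliers: if they are odds ratios (the typical output of a logistic regression), the probability boost they imply depends on the baseline admit rate. A quick sketch, assuming a hypothetical 20% RD admit rate:

```python
# Convert an odds multiplier into an implied admit probability, given a
# baseline probability. The 20% RD baseline is invented for illustration;
# the 4.222 and 1.166 multipliers are the Duke and Michigan figures above.

def admit_prob(baseline_p, odds_multiplier):
    odds = baseline_p / (1 - baseline_p) * odds_multiplier
    return odds / (1 + odds)

duke_ed = admit_prob(0.20, 4.222)      # ~51% implied ED admit probability
michigan_ed = admit_prob(0.20, 1.166)  # ~23% implied ED admit probability
```

The same multiplier implies a much bigger percentage-point jump when the baseline is near the middle of the range than when it is near 0% or 100%.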

Next, we'll look at the ED boost associated with non-splitter applicants (applicants whom I classify as neither splitters nor reverse-splitters):

Notes here:
- This time around, it seems like Michigan disadvantages non-splitter EDing applicants.  Thoughts?
- Northwestern drops off the list.

Next up, the splitters:

- The boosts are generally much bigger for splitters, where they actually exist.  
- Duke drops off the list here.
- Oh my god, Georgetown.

Finally, the reverse-splitters:

Only one story here, but it's a big one: UVA gives a massive boost to reverse-splitters who ED.  The sample size is fairly small at 58, but given the relatively small number of variables we are controlling for, the results should still be meaningful.

So, there you have it - the boosts associated with ED at the Top 14 across applicant categories.  In a (near) future post, I will rank schools by the boosts they give for earlier applications, which seems to be, in general, a bigger deal.

Tuesday, July 16, 2013

Which schools are splitter-friendly? (Non-URM edition)

The question of which schools are "splitter friendly" comes up pretty often, and it's not really an easy question to answer.  Are we looking for schools to which a high percentage of splitters are admitted relative to non-splitter applicants?  Schools that seem to value an applicant's LSAT score much more than his/her GPA?  How about schools that are willing to go really low on the GPA scale to nab those high LSAT scores?

There is actually a lot of overlap between those questions, but they're not all the same thing.  There is a ton of anecdotal evidence out there, but the point of this blog is to try to get to the bottom of what the numbers themselves can tell us.  With the "splitter friendly" question, it's not all that easy.

What I have done here is try to create an index number that incorporates information to answer the questions posed at the beginning of the post.  Both that index, and the data used to compute it, are found in the table below.  I have to stress that because there is so little data available - and especially little data on URM applicants - the following applies to non-URM applicants only, and is based entirely on non-URM applicant data.  I excluded URM data because it can really skew the overall picture, and I thought it best to handle it separately (although I'll have to approach analyzing URM splitter data differently, due to a serious dearth of data).  Just to break down the categories for you:

Non-Splitter GPA: This is the mean GPA of admitted non-splitter applicants.
Splitter GPA: This is the mean GPA of admitted splitter applicants.
GPA Differential: This is simply the difference between the previous two categories, and gives an indication of how much lower on the GPA scale a school will go compared with its average in order to chase high LSAT scores.
LSAT Bump: This is a number from my own regression analysis, and indicates the % increase in the likelihood of admission for each additional LSAT point an applicant has.
GPA Bump: This is the GPA equivalent of the LSAT bump (the % increase for each .10 GPA).
LSAT/GPA Differential: This is the LSAT Bump divided by the GPA Bump, to give us a measure of the relative importance of the two.  The higher the number, the more relative weight the LSAT has.
Splitter Success: This is the % of splitter applicants in the data who were accepted.
Non-Splitter Success: This is the % of the non-splitter applicants in the data who were accepted.
Splitter vs. Non Success:  This is Splitter Success divided by Non-Splitter Success, and gives us a measure of how splitters fare vs. their non-splitter counterparts.  If a school admits splitters at a higher percentage than non-splitters, the number will be greater than 1 (and if the opposite is true, it will be less than 1).  The higher the number, the stronger the indication that the school favors splitter applicants.
Index: This is the number I concocted to take into account the salient data from the other categories.  It is simply (GPA Differential + LSAT/GPA Differential) * Splitter vs. Non Success.  The higher the number, the more splitter-friendly a school is.
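For clarity, here is the index formula as code. The values in the example call are invented, not taken from any school's row in the table:

```python
# Splitter-friendliness index as described above:
# (GPA Differential + LSAT/GPA Differential) * Splitter vs. Non Success

def splitter_index(gpa_diff, lsat_bump, gpa_bump,
                   splitter_success, non_splitter_success):
    lsat_gpa_diff = lsat_bump / gpa_bump                      # LSAT/GPA Differential
    success_ratio = splitter_success / non_splitter_success   # Splitter vs. Non Success
    return (gpa_diff + lsat_gpa_diff) * success_ratio

# Made-up example: GPA differential 0.30, LSAT bump 30%, GPA bump 15%,
# splitter success 40%, non-splitter success 25%.
example = splitter_index(0.30, 30.0, 15.0, 0.40, 0.25)  # (0.30 + 2.0) * 1.6 = 3.68
```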

The mean index number for the schools included is 2.68, so I set that as a benchmark, and then broke the schools down into five categories:

Very Splitter Friendly: These schools have an index number that is more than two standard deviations above the mean. (Dark Green)
Splitter Friendly: These schools have an index number that is between one and two standard deviations above the mean. (Light Green)
Neutral-Friendly: These schools have an index number that is between the mean and one standard deviation above it. (Yellow)
Neutral-Unfriendly: These schools have an index number that is between the mean and one standard deviation below it. (Orange)
Splitter Unfriendly: These schools have an index number that is more than one standard deviation below the mean. (Red)

No schools were more than two standard deviations below the mean, although Stanford was right on the cusp.  The neutral-friendly and neutral-unfriendly categories make me a little squeamish.  Since they are all within one standard deviation of the mean, they're all pretty average.  In the end, I decided it was better to distinguish between the above-average and below-average middle, though.
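The bucketing can be sketched as follows. The 2.68 mean comes from the overall table; the standard deviation isn't stated in the post, so the 1.0 default here is just a placeholder:

```python
# Standard-deviation buckets for the splitter index. Mean is from the post;
# sd=1.0 is a placeholder since the post doesn't report the actual value.

def splitter_category(index, mean=2.68, sd=1.0):
    z = (index - mean) / sd
    if z > 2:
        return "Very Splitter Friendly"   # dark green
    if z > 1:
        return "Splitter Friendly"        # light green
    if z > 0:
        return "Neutral-Friendly"         # yellow
    if z > -1:
        return "Neutral-Unfriendly"       # orange
    return "Splitter Unfriendly"          # red
```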

So, there you have it.  And next up, the numbers themselves, with the schools ranked in order of USNWR ranking:

Remember, this is categorizing schools by their relationships to one another.  As you can see, there's not a whole lot of splitter love going on in the Top 14, in the grand scheme of things.  Still, just as we can compare all the schools among themselves, we can isolate the Top 14 and do the same thing.  And since someone is surely interested in how that shakes out, why don't we just do it right now?

For the Top 14, I kept the categories and color coding the same, but based everything off the mean index score of just the T14, which was 1.55 (much lower than the overall average).  Here are the results:

Northwestern and Georgetown are the only schools we could call splitter friendly, and Columbia, NYU, Penn, UVA, and Duke all fall on the friendly side of average.  This more or less confirms the conventional wisdom you hear thrown around, but I guess I should stress once again that there's really not a whole lot of splitter friendliness in the T14 (outside Northwestern, I guess).

Thursday, July 11, 2013

Ranking the schools by LSAT boost, waitlisted candidates included

Hey all.

I finally made it through the initial push in crunching and organizing numbers.  There is still a ton that can be done, and surely a lot I haven't thought of yet, but over the next couple weeks I am going to be posting preliminary results.  The first is from my Model 1, which includes accepted, rejected, and waitlisted applicants (I will shortly release the same ranking for my Model 2 results, based on data that only includes acceptances and rejections).

In the table below you will find schools ranked by the "boost" at each school associated with a one-point increase in LSAT scores of candidates.  The % number associated with each school is the % increase in the likelihood of an applicant being accepted vs. waitlisted or rejected (and also the % increase of an applicant being either accepted or waitlisted vs. rejected) for each additional LSAT point (think 169 vs. 168 here).  There are floors for the LSAT, for sure...a 148 is not X% more likely to get into Harvard than a 147, after all.

One of the problems with this is that there are almost certainly diminishing returns past a certain point with your LSAT score, so it's not a straight linear proposition.  Still, the goal here isn't so much precision (God, if I can just get one more LSAT point I'll increase my chances by X%!) but to give a rough idea of how schools weigh the LSAT and to compare schools with one another.
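To illustrate the diminishing-returns point: in a logistic model, the odds multiplier per LSAT point is constant, but the implied probability gain shrinks once the predicted probability is already high. The intercept and coefficient below are invented purely for illustration:

```python
import math

# Hypothetical logistic model of admission probability as a function of LSAT.
def p_admit(lsat, intercept=-60.0, beta=0.35):
    return 1 / (1 + math.exp(-(intercept + beta * lsat)))

gain_mid = p_admit(171) - p_admit(170)  # near the middle of the curve
gain_top = p_admit(179) - p_admit(178)  # near the top of the curve
# Same per-point odds multiplier, but a much smaller probability gain at the
# top of the curve than in the middle.
```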

Again, please remember that this is all done based on user-reported applicant data at Law School Numbers, with all the caveats that go along with it.

I'm not sure how many more ways I can disclaim this whole thing without sounding like I'm actually saying, "Pay no attention to what you're about to read", but again...there's no crystal ball here and I'm not a stats PhD.  Take this for what it's worth, and ask any questions you have.

Without further ado, 100 law schools ranked by LSAT boost:

Some notes:

  • Every single solitary school I have data on had a statistically significant boost associated with LSAT scores.  While this is not surprising, the same can't quite be said for GPA (though it comes close), which I will post soon.

  • See Stanford in last place?  That does NOT mean that Stanford doesn't care about your LSAT score!  The group of applicants applying to each school is going to be reasonably homogeneous, so Stanford being last on the list just means that, relative to other schools, Stanford places less weight on the LSAT in drawing distinctions between largely similar applicants.

  • Eventually, I would love to figure out WHY schools give different boosts for the LSAT, using the numbers from the table as the dependent variable.  Off the top of my head, with six of the bottom ten spots being occupied by T14 schools, I'm thinking USNWR rank might be a good independent variable to include in the model.  If anyone can think of anything else (measurable) that I might include, let me know!

As always, feedback and input are not only welcome, but encouraged.

Wednesday, July 10, 2013

Cornell profile up, plans

By request, I have posted a profile for Cornell.  In looking at the numbers for Cornell (and yesterday for Penn), it strikes me that the numbers are sometimes different for splitters than for non-splitters and reverse-splitters, namely the boost associated with applying earlier (splitters get a much bigger boost at both schools for earlier applications).  It might take a while, but I'm eventually going to take a closer look at this for all schools, because I think it's information that matters.

Tuesday, July 9, 2013

New school profiles

Hey all!  Thanks for continuing to check out this blog, and special thanks to the people who have sent requests through that form over on the right.  I'm trying to work through some issues, and continue processing data, but I did want to alert you that NYU and Penn profiles are now up, and they are listed in the right sidebar.

Saturday, July 6, 2013

Non-URM and URM Medians

In response to a special request, I have compiled a list of medians by school (at least the schools that I have data for) for both URM and non-URM students.  Unlike the published medians that schools release (which report the medians for attending students), these are the medians for accepted students at the school, and are based solely on the applicant-reported data on Law School Numbers.  The usual caveats apply.

The first table is simply a list of the schools, alphabetically (or roughly alphabetically) with the LSAT and GPA medians for both non-URM as well as URM accepted applicants (these are only accepts, and not waitlisted students who never reported a final decision):

Non-URM and URM LSAT and GPA Medians

The next two tables list, respectively, the LSAT and GPA differences between non-URM and URM applicants, from largest to smallest:

LSAT Differential

GPA Differential

There's really not much to add to the raw data here, except to say that, while the median LSAT for non-URM candidates is invariably higher than it is for URM candidates, 10% of the schools I have data for actually have higher URM GPA medians than non-URM medians, which is kind of fascinating.  I'm interested in what anyone thinks might explain this.  I haven't really thought about it too much yet, and I don't have a lot of time at the moment, so give me a hand, will you?

Tuesday, June 4, 2013

Stanford Law profile and custom analysis requests

Hey all,

I have finished the profile for Stanford Law School, which you can get to by clicking on that thing you just read that said Stanford Law School, or by finding Stanford in the page list on the right.  It's pretty interesting, and kind of confirms some of the conventional wisdom about Stanford in terms of its numbers preferences, so have a look.

Also, you might notice that I added a "custom analysis request" form over there on the right sidebar as well.  If you have a certain school that you're just dying to see profiled, or you have an idea of a concept you would like to see analyzed, I'll do my level best to make it happen as quickly as possible (or let you know if it's just outside the scope of my capabilities or the data).  Please hit me with any ideas you have, because that will make it easier to decide what to post here while I continue working on comparative analyses.


Saturday, June 1, 2013

Very basic T14 vs. Non-T14 breakdown

I have been plugging away processing the data, putting together school profiles as I get requests, and generally trying to think of the best way to approach this.  I definitely appreciate the feedback I've gotten so far, so keep it coming.

I did want to do something quick to post while I continue working, so I thought it might be interesting to take a quick look at how various factors play a role in the top 14 law schools as compared to the rest of the law schools outside the top 14.  I'm using the same Model 1 as always, in which I regress the decision result (acceptance, waitlist, or rejection) on these independent variables: LSAT score, GPA, earlier month sent, URM status, non-traditional status, and gender. Results are below:


For those who are reading this blog for the first time, those percentages given correspond to the increase in the likelihood an applicant has of either

        • Being admitted as opposed to being waitlisted/rejected
        • Being admitted/waitlisted as opposed to being rejected
So, for instance, at a T14 school, you're 26% more likely to be admitted with a 173 LSAT as opposed to a 172 LSAT, all other factors being held equal.

Seems pretty clear that the T14 schools give much bigger boosts for numbers, earlier submission (want to get squared away earlier?), and both URM and female applicants (the URM boost is much bigger than that for the non-T14).  The only group of applicants that seems to fare better outside the T14 is non-traditional applicants, who get a little boost in the non-T14 but none at all in the T14.

Of course, there is plenty of variation within these two categories of schools, and that's what I'm working on.  But, I figured this would be worth looking at in the meantime.

Comments, feedback, questions, and requests welcome!

New school profiles added, thoughts on future projects

If you take a look over to the right, you'll see that in the past couple days I have added (by request) school profiles for Harvard, Columbia, and Yale.  If any of these schools interest you, please have yourself a look.

I continue to process and organize the data, and hopefully shortly we'll be able to look at some comparisons.  I've been thinking about what I want to do with it, and here are some ideas I have come up with:

  • Looking at the approaches schools appear to have taken, both within the T14, and T14 vs. the rest, since applications started to crater a couple cycles ago.  Do numbers matter more or less?  How about everything else?  What larger trends do we see, and what school-specific approaches, if any?
  • Taking at least a superficial look at just how "epic" the predicted Epic Cycle was, and whether this differs from tier to tier.
  • Broad comparisons of the importance of numbers for T14 vs. non-T14, and whether or not emphasis placed on numbers correlates with USNWR rankings.
  • I look at two basic models, one including waitlists and one excluding them, and I'm starting to see that for many schools, the boosts increase significantly in the admitted vs. rejected only model.  I'm interested in seeing if this gap correlates with the % of waitlist offers schools hand out, and seeing what we might deduce from that.
  • Taking a little more in-depth look at what goes on for splitters, reverse-splitters, and ED applications at each school, possibly in order to develop a list of splitter- and reverse-splitter-friendly schools, which may be useful for those candidates who fall into one category or the other.
  • Although it'll take some doing, I'd love to get a handle on scholarship awards and what factors play a role there.
  • Finally, I want to develop a page where I just explain how some of this stuff works, what I have done with the data, etc.  The question someone posted about splitters on the UC Berkeley page really slapped me in the face with this necessity.

Okay, that's it for now.  I'll hopefully be adding more stuff soon, and in the meantime, I'll continue working so we can start to look at more interesting comparative analysis.  As always, I'm definitely open for suggestions and requests!

Thursday, May 30, 2013

University of Alabama Profile up, and a note about scholarship money analysis

Rather than stick all the school profiles here, which will really clutter up the page, I'm just going to make separate pages for each school.  So, you can find Alabama's (and the others, as I add them) on that page list over to your right.  See it?  Yeah, there it is.

Also, although I hope to be able to do some stuff with scholarship money, Rob has pointed out to me that there are potentially some problems with the dataset that will take a while to work through (if they're solvable at all), so that is taking a back burner for a while.

Wednesday, May 29, 2013

University of California Berkeley Profile

I am sorting through the data I have, going through schools alphabetically. It's going to be a long project, but I'm making progress day by day. For now, though, by request, I am posting a school profile for the number crunching I did on the UC Berkeley data available from Law School Numbers.  Again, there are a variety of important factors to keep in mind when considering this analysis, including the possibility that some of the datapoints are completely bogus, and that LSN users might be skewed a little towards the top end of applicants. Given the number of datapoints we have, though, and that I've done at least a little pre-emptive data cleaning, I think this is worth a look.

The first table I present is the results of an ordered logistic regression, which allows me to use folks who reported acceptances, rejections, and waitlists. Eventually I'll get around to creating descriptions for all of these types of analysis so I can just say "click here to see what I mean by this," but for now, let me just explain it. In an ordered logistic regression, the dependent variable is a categorical variable, and in our case I have coded an acceptance as a 2, a waitlist as a 1, and a rejection as a 0 (in descending order of desirability, although I know plenty of waitlisted candidates who cry that they'd rather just be put out of their misery with a rejection than suffer in purgatory). The data you see below gives the percentage increase in the likelihood, other variables controlled for, of being either:
      • Accepted rather than (combined waitlisted/rejected)
      • (Combined accepted/waitlisted) rather than rejected
I know, it's a little confusing.  By way of example: in this model for Berkeley, for two otherwise identical candidates, an additional point on the LSAT increases an applicant's chances of being accepted rather than either waitlisted or rejected by 29.3% (in other words, a 170 is 29.3% more likely than a 169 to get accepted rather than waitlisted or rejected, all else being equal).  A 170 is also 29.3% more likely than a 169 to get either accepted or waitlisted rather than rejected, all else equal.  If you have any questions, just e-mail me or something, and I'll try to explain better.  Or, check out this link to the awesome UCLA stats site.

In any case, here are the results from this first model, in which I test the impact of LSAT score, GPA, each earlier month the application is sent (that is, September vs. October, or October vs. November - not months before the deadline), URM status, non-traditional status, and female applicant status.  Since Berkeley doesn't have a binding ED option, I left it off.  Everything is based on LSN data from the 2003/2004 cycle to the present.


The thing that really stands out to me here, although it may be hard to see if you haven't yet seen what these numbers look like for other schools, is the emphasis placed on GPA vs. LSAT.  At least judging by the schools I have looked at so far, Berkeley gives a lot more weight to the GPA vis-a-vis the LSAT than other schools do.  The boost for a .10-point increase in GPA is massive, and the LSAT boost is relatively small compared to most schools.  The boost for each month earlier the application is sent is also very substantial - in fact, many schools seem to give no boost for this at all.  The URM boost is pretty substantial, too, but one thing you have to keep in mind when interpreting it is that there are almost certainly different "floors" for LSAT and GPA for URM applicants than for non-URM applicants.  This matters because, while the "boost" indicates that, all else equal, a URM applicant is almost nine times as likely to get in as a non-URM applicant, this is inflated a bit: below the numbers "floors" for non-URM applicants, a URM applicant is pretty much infinitely more likely to get in.  The increases for non-traditional applicants and female applicants surprised me a little, too.  The final numbers - URM equivalents in LSAT and GPA points - are simply the number of extra LSAT (or GPA) points a non-URM candidate would need in order to get a boost equivalent to that of URM status.  It's an interesting way to look at how much the "URM bump" is really worth.
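For the curious, that URM-equivalent calculation is straightforward: since the boosts are additive on the log-odds scale, you just divide log odds ratios.  A quick sketch using the rough numbers from this post (the nearly 9x URM odds boost and the 29.3% per-LSAT-point boost):

```python
import math

# Rough figures from the Berkeley discussion above: URM status multiplies
# the odds by ~9x, and each LSAT point multiplies them by ~1.293 (29.3%).
urm_odds_ratio = 9.0
lsat_odds_ratio = 1.293

# Boosts are additive in log-odds, so the "URM equivalent in LSAT points"
# is a ratio of log odds ratios:
equiv_points = math.log(urm_odds_ratio) / math.log(lsat_odds_ratio)
print(f"URM boost ~= {equiv_points:.1f} extra LSAT points")  # ~8.6
```

The GPA-equivalent number in the table is computed the same way, just with the per-.10-GPA-point odds ratio in the denominator.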

The next model excludes waitlists, including only applicants who reported either being accepted or rejected (whether that was directly, or after first being waitlisted).  The results in the table should be interpreted in the same way, but the interpretation is a little easier: the number given for each variable is simply the increase in likelihood of being accepted rather than rejected.


Not a whole lot of difference between the two models, although there are slightly bigger boosts for most of the factors if we just consider acceptances vs. rejections, without considering waitlists (which are problematic both for yield-protection reasons - more prevalent at some schools than others, to be sure - and because a lot of "waitlist" profiles belong to people who didn't bother to update with a final status, so we can't just treat them as rejections).

Normally, the next thing I'd do here is take a look at how different factors influence scholarship awards, but because the number of observations for Berkeley is so low, and because it seems like a couple of datapoints really throw the whole thing off due to the small sample size, I'm going to leave that out.  If you're really interested, and promise to not read too much into it, e-mail me and I'll let you know.

Last, I'm including a table that breaks down how non-splitters, splitters, and reverse-splitters are represented in the data.  This one you really have to be careful with, because the data on LSN does skew towards higher-caliber applicants, and so acceptances are more highly represented than they are in the applicant pool.  Really, the value of this kind of thing will become more clear when we can compare schools, because that same "higher-caliber" applicant caveat will apply across the board, so we can probably draw somewhat valid conclusions by comparing schools.  For now, I'll include it for interested parties, but please do not look at this and say to yourself, "Self, as a splitter I have an X% chance of getting into Boalt!"  Promise?  Ok!

So there you have it.  I'm interested in thoughts anyone has.  For me, the real takeaways of this entire thing are that it pays big to apply to Berkeley as early as you possibly can, and that it's a pretty friendly place for non-traditional students and female applicants.  Also, the relative weight Berkeley gives to the LSAT and GPA is different from what we usually see, so if your LSAT isn't up to snuff but you've got a stellar GPA, it might be worth throwing a hail-mary Berkeley's way.