He agreed to specific performance, which is what they are asking for. Rumor is that the case will go to the head judge of the chancery court, a judge who has previously enforced specific performance.
There's a good treatment of this in the WSJ. I have no dog in the fight, but they make a convincing argument that it's unlikely the court will order specific performance. It's effectively unenforceable if they do order it (assuming this isn't what Musk actually wants). Courts never want to issue orders that can be ignored.
They aren't asking for damages; they are asking for specific performance, which Musk agreed to as a term of the deal.
So what? Specific performance clauses appear in the vast majority of deals, but it's still an extremely rare remedy and almost never granted for something this complex.
Did you read the contract?
Yes, and I've read expert opinions on it. I wouldn't count on my personal interpretation since I'm well aware I can overlook things.
That's helpful to know, thanks.
Lol, you ignored the point. There's no way that it's a "winner" to claim that Musk misrepresented what they are doing if they are in fact randomly sampling 100 accounts a day. They may have more success arguing that it was a violation of the NDA to disclose it.
It is a material misrepresentation and a breach of contract.
Not sure what you mean. There's no way it's material under any legal theory of materiality. It could be a breach of an NDA obligation or a non-disparagement clause (but not both at once), but that's about it.
It's also funny to argue that it's a material misrepresentation in a case that may turn on whether Twitter's 5% claim is a material misrepresentation. The claim about 100 accounts is light years away in magnitude from the 5% claim, and it's very likely the court would hold that the 5% isn't material even if it turns out to be very wrong (and possibly even if the Twitter board had reason to know it was wrong).
They would if this were a simple and well-constructed study. In reality they may or may not have anything with any real validity at all (even assuming they get good data out of the reviews). The fact that you express so much certainty means you're either repeating their claims (which have not been independently verified) or misrepresenting your own knowledge. Can you confirm which it is that you're doing?
I'm assuming they have competent people; it may be they have incompetent or corrupt individuals. My point was mostly that 100 per day is plenty to get accurate and meaningful results when they are aggregating the 100-account daily samples into a 9,000-account sample for a quarterly result.
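To put a rough number on that - a back-of-the-envelope sketch assuming the pooled 9,000 reviews behave like one simple random sample, which is itself an assumption about their process:

```python
# Rough 95% margin of error for an estimated ~5% proportion, assuming the
# pooled reviews behave like a single simple random sample (a big assumption;
# we don't actually know how Twitter draws these).
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of an approximate 95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

p_hat = 0.05  # the disputed 5% figure
for n in (100, 9000):
    print(f"n={n:>5}: 5% +/- {margin_of_error(p_hat, n) * 100:.1f} percentage points")

# n=  100: 5% +/- 4.3 percentage points  (a single day's sample)
# n= 9000: 5% +/- 0.5 percentage points  (a quarter's worth pooled)
```

So a single day's 100 is noisy on its own, but the pooled quarterly figure is tight - provided the samples really are independent random draws.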
My point is you missed the point. Corporate compliance is not designed to be an academic study. There's virtually no chance that it doesn't have material issues in design and implementation that make the conclusions suspect and undercut the degree of certainty you're expressing.
I wasn't even including the risks associated with corruption. In reality, though, there's a real possibility that the "manual process," which this part is (as opposed to the automatic process that Twitter claims scrubs over a million accounts a day), generates bad data before it even feeds into the model. Please don't forget how often individuals falsify results (sometimes for nefarious reasons, sometimes because they don't want to do the work), misinterpret conclusions, or just do a poor job. These compliance officers are paid to generate a confirmation of the 5% claim, not to rock the boat, and there are certainly strong incentives on them to get the "right" answer.
It isn't sophistry - it is a very important distinction. A sample size of 100 would have a large variance; a sample size of 9,000 has very little variance.
I get the basic principle there, but the devil is in the details. If it's a daily sample, then it's really a sample size of 100 repeated 90 times. Depending on how the sample is drawn - for example, from all active accounts versus newly created accounts - it could introduce or hide a whole lot of issues. If it's pulling all 9,000 at once (which seems unlikely), then how it's doing that is still a question.
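To illustrate that with deliberately made-up numbers (nothing here reflects what Twitter actually does): if the sampling frame quietly skips a bot-heavy segment, pooling 90 daily batches just gives you a very precise estimate of the wrong quantity.

```python
# Toy simulation, entirely invented numbers: sample size fixes variance, not bias.
import random

random.seed(0)

# Hypothetical population: 90,000 established accounts with a 4% bot rate and
# 10,000 new accounts with a 20% bot rate, so the true overall rate is ~5.6%.
population = (
    [("established", random.random() < 0.04) for _ in range(90_000)]
    + [("new", random.random() < 0.20) for _ in range(10_000)]
)
true_rate = sum(is_bot for _, is_bot in population) / len(population)

def pooled_estimate(frame, days=90, per_day=100):
    """Pool `days` daily samples of `per_day` accounts into one estimate."""
    hits = sum(is_bot
               for _ in range(days)
               for _, is_bot in random.sample(frame, per_day))
    return hits / (days * per_day)

established_only = [acct for acct in population if acct[0] == "established"]

print(f"true rate:                {true_rate:.3f}")
print(f"frame = all accounts:     {pooled_estimate(population):.3f}")
print(f"frame = established only: {pooled_estimate(established_only):.3f}")
```

Whether anything like that is going on is exactly the kind of question you'd want answered before taking the quarterly number at face value.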
It really doesn't matter, though. The point is not whether there's enough of a sample being taken (the sample size is adequate for a study - the real questions are more likely to be about whether the data is valid); the point is whether referring to the sample as 100 - assuming for the moment that they actually take 90 samples of 100 - is incorrect. There's no way that it would be, and honestly there's no way a judge would conclude it was.