singron's comments

Their options should be priced lower, but the common stock isn't valued according to the $5.15B. They raised $300M at $12B and $425M at $7.4B, which are both under water, so those shareholders will use their liquidation preference to get paid at least 1x, i.e. at least $725M off the top. Assuming those rounds owned 7% of the company, there is at most $5.15B - $0.725B ≈ $4.4B left for the remaining 93% of shareholders, versus the ~$4.8B they'd get pro rata. That's about 8% less. If fees, legal services, or retention packages get deducted, or there are worse liquidation preferences or more underwater rounds, it gets even lower.

You have to exercise the options or let them expire. You normally have 10 years, not 7, but if a company is coming up on 10 years after it issued its first options, it might try a tender offer to buy some employee shares. If your 10-year-old "startup" shares can't be sold anywhere, they probably aren't worth exercising. A company that can't provide liquidity to employees for 10 years will probably never do it.

ISO options have to expire within 10 years of when they are granted. Sometimes companies make them expire earlier than that, so OP might be thinking of options they were granted. E.g. I once had options that expired 30 days after ending employment even though the ISO rules allow up to 90 days.

PG does reuse plans, but only if you prepare a query and run it more than 5 times on that connection. See plan_cache_mode[0] and the PREPARE docs it links to. This works great on simple queries that run all the time.
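
A minimal sketch of what that looks like in plain SQL (table and column names are made up):

    -- Plans are cached per prepared statement, per connection.
    PREPARE get_items (int) AS
      SELECT * FROM items WHERE customer_id = $1;

    -- The first 5 executions get custom plans built for the actual
    -- parameter value; after that, PG may switch to a cached generic
    -- plan if it doesn't look worse on average.
    EXECUTE get_items(42);

    -- plan_cache_mode (PG 12+) overrides that heuristic:
    SET plan_cache_mode = force_custom_plan;  -- or force_generic_plan / auto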

It can really stink on some queries, since the generic plan can't "see" the parameter values anymore. E.g. if you have an index on (customer_id, item_id) and run a query where `customer_id = $1 AND item_id = ANY($2)` ($2 is an array parameter), the generic plan doesn't know how many elements are in the array and can pick an elaborate plan like a bitmap index scan instead of a nested loop join. I've seen the generic plan flip-flop in a situation like this with a >100x load difference.
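
A sketch of that failure mode, with an invented schema:

    CREATE INDEX ON line_items (customer_id, item_id);

    PREPARE get_line_items (int, int[]) AS
      SELECT * FROM line_items
      WHERE customer_id = $1 AND item_id = ANY($2);

    -- Custom plans see the actual array length and can pick a cheap
    -- nested loop over the index for a 3-element array. The generic
    -- plan has to assume some fixed cardinality for $2 and may flip
    -- to something like a bitmap scan that's far more expensive for
    -- the arrays you actually pass.
    EXECUTE get_line_items(42, '{1,2,3}');

    -- One blunt workaround if you get bitten:
    SET plan_cache_mode = force_custom_plan;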

The plan cache is also per-connection, so you still end up planning the same query once per connection. This is another reason why consolidating connections (e.g. with a pooler) is important in PG.
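
You can watch this happen on your own session; assuming PG 14+ for the plan counters:

    -- Per-session view of prepared statements. generic_plans and
    -- custom_plans count which path each statement has been taking.
    SELECT name, statement, generic_plans, custom_plans
      FROM pg_prepared_statements;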

0: https://www.postgresql.org/docs/current/runtime-config-query...


Yes, manual query preparation by the client [1] is what you did in MSSQL Server up until v7.0, I believe, which shipped in 1998 and started doing automatic caching based on statement text. I believe it also cached stored procedure plans before v7.0, which is one reason stored procedures were recommended for all application access to the database back then.

MSSQL Server also does parameter sniffing nowadays and can keep multiple plans based on parameter values. It also has hints to guide or disable sniffing, because many times a generic plan is actually better; that's again something PG doesn't have: hints [2].
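
For example, the standard T-SQL hints for this (table and parameter names invented):

    -- Re-sniff on every execution: always plan for the current value.
    SELECT * FROM Orders WHERE CustomerId = @CustomerId
    OPTION (RECOMPILE);

    -- Disable sniffing: plan against average column density, i.e.
    -- force a generic plan.
    SELECT * FROM Orders WHERE CustomerId = @CustomerId
    OPTION (OPTIMIZE FOR (@CustomerId UNKNOWN));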

PG being process-based per connection instead of thread-based makes it much more difficult to share plans between connections, and it also has no way to serialize plans. MSSQL can save plans to XML, which can be loaded on other servers and "frozen" so the optimizer keeps using that plan if desired; they can also be loaded into plan inspection tools that way [3].
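
The "freeze" mechanism is the USE PLAN hint; a sketch with the plan XML elided, since a real plan document runs to pages:

    -- Capture the XML plan (e.g. SET STATISTICS XML ON, or pull it
    -- from the plan cache), then pin the query to it:
    SELECT * FROM Orders WHERE CustomerId = @CustomerId
    OPTION (USE PLAN N'<ShowPlanXML ...>...</ShowPlanXML>');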

1. https://learn.microsoft.com/en-us/sql/relational-databases/n...

2. https://learn.microsoft.com/en-us/sql/t-sql/queries/hints-tr...

3. https://learn.microsoft.com/en-us/sql/t-sql/queries/hints-tr...


PostgreSQL shares other caches between processes so they probably could have a global plan cache if they wanted. I wonder why they don’t though.

One possible reason is that the planner configuration can differ per connection, so the plans might not transfer.


In MSSQL Server, the various session/connection options are part of the plan cache key; if they differ, different plans are cached.

I believe the plan data structure in PG is intimately tied to process-space memory addresses, since it was never meant to be shared between processes, and it can even contain generated executable code.

This makes plans difficult to share between processes without a heavy redesign, but it would be a good change IMO.


> PostgreSQL shares other caches between processes so they probably could have a global plan cache if they wanted. I wonder why they don’t though.

> One possible reason is that the planner configuration can differ per connection, so the plans might not transfer.

That's part of it; another big part is that transactional DDL makes it more complicated, since different sessions might require different plans.
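
A sketch of the problem, assuming an illustrative items table:

    -- Session A:
    BEGIN;
    ALTER TABLE items ADD COLUMN discount numeric;
    -- ...not yet committed. Inside this transaction, items has the
    -- new column; every other session still sees the old schema.
    -- A single shared plan for "SELECT * FROM items" can't be
    -- correct for both views of the table at once.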


It looks like there are two JS backends: quickjs and vm-js (vendor/ecma-rs/vm-js), based on a brief skim of the code. There is some logic to select between the two. I have no idea if either or both of them work.

If you are doing continuous profiling, you are probably using a low overhead stack sampling profiler rather than recording every method entry and exit.

That's a fair point. It really depends. For example, if you're recording method run times via an observability SDK at full fidelity, this could be an issue.

You can still make that bet at 10% "yes" for the next year. Previous years had similar patterns, so it's not a reaction to Trump.

To be fair, we don't need to find little green men in a UFO. It's sufficient to e.g. find fossils of extinct microorganisms on Mars, a slim possibility but one that's a goal of the Mars Sample Return mission.

These markets also have low volume at reasonable prices. If you bought $10K of "no" right now for next year, you would only get an 8% return, not 10%, since your order would eat through the book. You could work better trades to get better prices, but the odds also become more sane over the year. The S&P 500 is also up 18% YTD (about 13% annualized over the last 5 years), and you can buy as much of that as you want.


I don't believe fossils of microorganisms would count under the resolution criteria, but the ambiguities of Polymarket are definitely something to be wary of if the resolutions aren't well defined.

To your last point, I'd argue that the S&P 500 has way more risk. Bets on insane stuff like this, where a sufficient number of morons believe in the obviously-not-going-to-happen outcome, are the ones that act like CDs.


Isn't that the Steam Linux Runtime? Games linked against the runtime many years ago still run on modern distros.


This is a completely obvious conclusion with an unexpected definition of "effort" to justify a click-bait title.


Except that the conclusion is wrong, because you need tolerance. A bridge is designed to tolerate a certain weight, and then you factor in a large safety margin for special circumstances; the same is true of effort.

You put more effort into your team presentation just in case there are guests. You cannot instantly produce a better presentation when you arrive and see the CTO. In sports such as bouldering, you grip a hold slightly harder than strictly required in case you suddenly slip, or simply to accommodate the dynamics as you shift your weight without requiring ultra precision, which is a different form of effort.

The additional effort you expend is based on your estimation of the risk. As you master whatever skill it is, you become better able to estimate the risks and the need (or lack thereof) for additional effort. Novices expend more effort than masters because they cannot gauge the need, but they will also make more mistakes by guessing the strictly correct effort without accounting for the risk.

The appropriate (over)effort is never 0 because there is always some context dependent risk.


Right?

Such a clear fallacy of definition in the opening paragraphs that it renders the rest of the article a pointless read.

Yes, if you arbitrarily redefine terms you can reach arbitrary conclusions.


https://gitlab.freedesktop.org/wayland/wayland-protocols/-/t...

This is an incomplete list of protocols that aren't part of core Wayland. Compositors implement additional protocols that aren't even part of this process (e.g. wlr-screencopy-unstable). See the wlroots protocols here: https://gitlab.freedesktop.org/wlroots/wlroots/-/tree/master...


Right, but there's the xdg-desktop-portal for screen capture, which runs through PipeWire and supports sandboxing (because it's negotiated over D-Bus), and which all the main compositors support.

Just because a protocol isn't part of Wayland doesn't mean a standard protocol doesn't exist.

