
Assuming your leveling matches standard bigtech leveling, it's generally expected at level 7+ that you are doing lots of cross-org work anyway, and have responsibilities at quite a high level. Unless you're one of those rare engineers at this level who is a deep specialist (in which case this team move scenario sounds unlikely), the value you add above someone at level 6 is just that you have breadth of experience and can lead cross-org and/or cross-functional initiatives.

No, nobody is ever fully in charge of his or her own destiny, but the entire point of being a senior staff engineer is that you have the autonomy to exercise protagonism separate from your org structure, in ways that managers and directors do not. So... do the cross-org collaboration thing – and not because it's what you feel like, but because as an L7, it's literally your job to do that!


Good point -- right now, since I'm new to this team and having to re-establish myself, I feel a big loss in scope, influence, and visibility, but I think over time I'll be working on more and more cross-cutting projects. I'm starting to see the seeds of that already.


Good luck! Those were the behaviors that got you promoted to senior staff in the first place – so I would imagine that your org is actively expecting that you will continue them!


The problem with this approach is that it requires the system doing randomization to be aware of the rewards. That doesn't make a lot of sense architecturally – the rewards you care about often relate to how the user engages with your product, and you would generally expect those to be collected via some offline analytics system that is disjoint from your online serving system.

Additionally, doing randomization on a per-request basis heavily limits the kinds of user behaviors you can observe. Often you want to consistently assign the same user to the same condition to observe long-term changes in user behavior.

This approach is pretty clever on paper but it's a poor fit for how experimentation works in practice and from a system design POV.


I don't know, all of these are pretty surmountable. We've done dynamic pricing with contextual multi-armed bandits, in which each context gets a single decision per time block and gross profit is summed up at the end of each block and used to reward the agent.

That being said, I agree that MABs are poor for experimentation (they produce biased estimates that depend on somewhat hard-to-quantify properties of your policy). But they're not for experimentation! They're for optimizing a target metric.
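Roughly, the shape of it is something like this (a minimal sketch in Python; the candidate prices, contexts, and the epsilon-greedy policy are illustrative placeholders, not what any real system necessarily uses):

    import random
    from collections import defaultdict

    PRICE_ARMS = [9.99, 12.99, 14.99]    # hypothetical candidate prices
    EPSILON = 0.1                        # exploration rate

    counts = defaultdict(int)             # pulls per (context, arm)
    value_estimates = defaultdict(float)  # running mean profit per (context, arm)

    def choose_price(context):
        """One pricing decision per context per time block."""
        if random.random() < EPSILON:
            return random.choice(PRICE_ARMS)
        # Exploit: pick the arm with the highest estimated profit so far.
        return max(PRICE_ARMS, key=lambda p: value_estimates[(context, p)])

    def end_of_block_update(context, price, gross_profit):
        """Called once per block, with the block's summed gross profit as the reward."""
        key = (context, price)
        counts[key] += 1
        # Incremental mean update.
        value_estimates[key] += (gross_profit - value_estimates[key]) / counts[key]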


Surmountable, yes, but in practice it is often just too much hassle. If you are doing tons of these tests you can probably afford to invest in the infrastructure for this, but otherwise A/B testing is just so much easier to deploy that it does not really matter to you that you will have a slightly ineffective algo out there for a few days. The interpretation of the results is also easier, as you don't have to worry about the time sensitivity of the collected data.


You do know Amazon got sued and lost for showing different prices to different users? That kind of price discrimination is illegal in the US. Related to actual discrimination.

I think Uber gets away with it because it’s time and location based, not person based. Of course if someone starts pointing out that segregation by neighborhoods is still a thing, they might lose their shiny toys.


Indeed, we are well aware.


You can do that, but now you have a runtime dependency on your analytics system, right? This can be reasonable for a one-off experimentation system but it's not likely you'll be able to do all of your experimentation this way.


No, you definitely have to pick your battles. Something that you want to continuously optimize over time makes a lot more sense than something where it's reasonable to test and then commit to a path forever.


Hey, I'd love to hear more about dynamic pricing with contextual multi-armed bandits. If you're willing to share your experience, you can find my email on my profile.


You can assign multi-armed bandit trials on a lazy, per-user basis.

So the first time a user touches feature A, they are assigned to some trial arm T_A, and then all subsequent interactions keep them in that arm until the trial finishes.
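A minimal sketch of that lazy assignment (the in-memory dict and the uniform draw are placeholders; a real system would persist the assignment and draw from the bandit's current policy):

    import random

    ARMS = ["control", "variant_a", "variant_b"]  # hypothetical trial arms
    assignments = {}  # stand-in for a durable assignment store

    def get_arm(user_id, experiment_id):
        key = (user_id, experiment_id)
        if key not in assignments:
            # First touch: draw an arm and remember it.
            assignments[key] = random.choice(ARMS)
        # Every subsequent interaction sees the same arm until the trial ends.
        return assignments[key]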


The systems I’ve used pre-allocate users to an arm effectively at random by hashing their user id or equivalent.


To make sure user id U doesn’t always end up in, e.g., the control group, it’s useful to concatenate the id with the experiment uuid before hashing.
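Something like this, as a sketch (SHA-256 and the 100-bucket split are just illustrative choices):

    import hashlib

    def bucket(user_id, experiment_uuid, num_buckets=100):
        # Mixing in the experiment uuid means the same user lands in
        # different buckets for different experiments.
        digest = hashlib.sha256(f"{experiment_uuid}:{user_id}".encode()).digest()
        return int.from_bytes(digest[:8], "big") % num_buckets

    # e.g. assign to treatment when bucket(...) < 5 for a 5% rollout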


How do you handle different users having different numbers of trials when calculating the "click through rate" described in the article?


Careful when doing that though! I've seen some big eyes when people assumed IDs to be uniformly distributed and suddenly their "test group" was 15% instead of the intended 1%. Better to generate a truly random value using your language's favorite crypto functions, so you can work with it without fear of busting production.


The user ID is non uniform after hash and mod? How?


If you mod by anything other than a power of two, it won't be. https://lemire.me/blog/2019/06/06/nearly-divisionless-random...


That article is mostly about speed. The following seems like the one thing that might be relevant:

> Naively, you could take the random integer and compute the remainder of the division by the size of the interval. It works because the remainder of the division by D is always smaller than D. Yet it introduces a statistical bias

That's all it says. Is the point here just that 2^31 % 17 is not zero, so 1, 2, and 3 potentially happen slightly more often than 15 and 16? If so, this is not terribly important.


> If so, this is not terribly important

It is not uniformly random, which is the whole point.

> That article is mostly about speed

The article is about how to actually achieve uniform random at high speed. Just doing mod is faster but does not satisfy the uniform random requirement.
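For reference, the standard fix is rejection sampling: throw away raw draws from the "leftover" tail and redraw, so every residue is exactly equally likely. A sketch (in Python, secrets.randbelow already does this for you):

    import secrets

    def unbiased_randbelow(n, bits=32):
        # Largest multiple of n that fits in `bits` bits; raw values at or
        # above it would over-represent the low residues, so redraw instead.
        limit = (1 << bits) - ((1 << bits) % n)
        while True:
            x = secrets.randbits(bits)
            if x < limit:
                return x % n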


If your number of A/B testing cohorts is fewer than 100, then yeah, this passes for being uniform.


It doesn't, mathematically. It might be good enough for some cases, but it is not good enough for cases that actually require uniformity.


In addition to the other excellent comments: IDs will also become non-uniform once you start deleting records. That will break any hopes you might have had of modulo and percentages being reliable partitions, because the "holes" in your ID space could be maximally bad for whatever use case you thought up.


Just make sure you do the hash right so you don’t end up with cursed user IDs like EverQuest.


And in that sense Tillich isn't that far from, say, Aquinas, who is consistent about asserting that existence is not a "real" predicate and that God's existence is outside of the world and outside of space and time.

You don't even need to squint that hard to see a commonality between Tillich's notion of discussing God symbolically and Aquinas's notion of doing so analogically, not to mention the contrast between finite humans and an infinite God who is beyond understanding. And not to mention that apophaticism – the idea that positive knowledge about God is impossible – has been a feature of Christian theology since the beginning.

So much of this can be taken in ways that not only aren't outside the bounds of Christian orthodoxy, but also align with more sophisticated Christian philosophical understandings of God.

That much, of course, is not why Tillich is controversial!


So… God is mathematics?


Don't know where you got that from.


I'm guessing a sloppy equivalence due to the idea of "existence is outside of the world and outside of space and time" showing up in descriptions (e.g. "a platonist might assert that the number pi exists outside of space and time" from https://iep.utm.edu/mathplat/).


Yes!

To be more specific: if the Universe (existence itself) can be described by mathematics, and mathematics is timeless (beyond mere physical existence), then essentially the physical Universe is inevitable and in some sense "created by" mathematics. In this view of creation, mathematics plays the role of God.

See also: https://en.wikipedia.org/wiki/Permutation_City


That one is not going to be particularly useful at all. It's a linear probe, which means it's really only for imaging things near the skin. It's one-third of a typical set of probes: https://stanfordmedicine25.stanford.edu/the25/ultrasound.htm...

It's different from the MEMS-based devices this article talks about, which have the novelty of letting you do everything with a single probe. Though of course that comes with trade-offs.


Yes, they still require gel. That's just a matter of physics. Changing the transducer technology doesn't change that.


You're not allowed to buy these without an NPI number. And while ultrasound is relatively easy, it's still not really usable without some training. I think there have been some studies of training users to do ultrasound at home for a limited set of views, but it's not really a "pick it up and look around" sort of thing.


A big part of the point of that Construction Physics Substack is to explore these factors around innovation, productivity, and process change in the construction sector. I don't think it's really quite as simple as you imply: https://www.construction-physics.com/p/why-its-hard-to-innov..., https://www.construction-physics.com/p/sketch-of-a-theory-of...

Sometimes practices do reflect real constraints, rather than just path-dependence.


I'm not sure why this piece mentions deregulation. The drinks limit was in 1956, while the sandwich spat was in 1958 – but deregulation wasn't until 1978, two decades later!

The collusion in question seems to only relate to regulation, which prohibited the airlines from competing on price. The incentives obviously don't work out the same with deregulation!


It was kind of a weirdly written intro, but I think it was meant to set up "here are some examples of how ridiculous things were before deregulation".


It mentions deregulation because it is challenging a purported assumption that the only airline regulations before 1978 were those set by government.

Arguably it was the government regulation against competing on price that led to the industry regulations on food service. Airlines tried to compete on other bases, so food service was much better than it is today.


The Construction Physics Substack had a good piece on the economics of building commercial aircraft a bit ago: https://www.construction-physics.com/p/a-cycle-of-misery-the..., with discussion on HN here: https://news.ycombinator.com/item?id=39339149.

It seems that, on paper, in this case specifically, re-engining rather than clean sheet made a lot of sense. Of course, we all know how things ended up in practice...

But at this point, if Boeing were to spend a lot of money on a clean-sheet design – even if they shipped it on time, would they have customers? It's hard to see how that would play out.


This was a fabulous read, thanks for sharing it.


It's also not entirely a technical issue, anyway. In a vacuum the Sourcehut UX might be fine, but if people are used to GitHub-style UX, then they will have a hard time with Sourcehut and end up doing the wrong thing, like emailing the maintainer directly rather than using the mailing lists – through no fault of the mailing lists themselves!

