Assuming your leveling matches standard bigtech leveling, it's generally expected at level 7+ that you are doing lots of cross-org work anyway, and have responsibilities at quite a high level. Unless you're one of those rare engineers at this level who is a deep specialist (in which case this team move scenario sounds unlikely), the value you add above someone at level 6 is just that you have breadth of experience and can lead cross-org and/or cross-functional initiatives.
No, nobody is ever fully in charge of his or her own destiny, but the entire point of senior staff engineers is that you have the autonomy to exercise protagonism separate from your org structure, in ways that managers and directors do not. So... do the cross-org collaboration thing – and not just because you feel like it, but because as an L7, it's literally your job to do that!
Good point -- right now, being new on this team and having to re-establish myself, I feel a big loss in scope, influence, and visibility, but I think over time I'll be working on more and more cross-cutting projects. I'm starting to see the seeds of that already.
Good luck! Those were the behaviors that got you promoted to senior staff in the first place – so I would imagine that your org is actively expecting that you will continue them!
The problem with this approach is that it requires the system doing randomization to be aware of the rewards. That doesn't make a lot of sense architecturally – the rewards you care about often relate to how the user engages with your product, and you would generally expect those to be collected via some offline analytics system that is disjoint from your online serving system.
Additionally, doing randomization on a per-request basis heavily limits the kinds of user behaviors you can observe. Often you want to consistently assign the same user to the same condition to observe long-term changes in user behavior.
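For what it's worth, if you do want consistent per-user assignment without storing any state, the usual trick is deterministic hashing with a per-experiment salt rather than per-request randomness. A minimal sketch in Python (all names hypothetical):

```python
import hashlib

def assign_bucket(user_id: str, experiment_salt: str, buckets: int = 100) -> int:
    """Deterministically map a user to one of `buckets` buckets.
    The salt keeps assignments independent across experiments."""
    digest = hashlib.sha256(f"{experiment_salt}:{user_id}".encode()).digest()
    # Taking 64 bits mod 100 has negligible modulo bias at this width.
    return int.from_bytes(digest[:8], "big") % buckets

# Same user + same experiment -> same bucket on every request,
# so you can observe long-term behavior per condition.
in_treatment = assign_bucket("user-123", "pricing-exp-v1") < 10  # 10% treatment
```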
The bandit approach is pretty clever on paper, but it's a poor fit for how experimentation works in practice and from a system-design POV.
I don't know, all of these are pretty surmountable. We've done dynamic pricing with contextual multi-armed bandits, in which each context gets a single decision per time block and gross profit is summed up at the end of each block and used to reward the agent.
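To make that concrete, here's roughly the shape of it – not our actual system, just a toy epsilon-greedy contextual bandit with one price decision per (context, time block) and the block's summed gross profit fed back as the reward:

```python
import random
from collections import defaultdict

PRICES = [9.99, 12.99, 14.99]   # the "arms"
EPSILON = 0.1                   # exploration rate

totals = defaultdict(float)     # (context, price) -> summed reward
counts = defaultdict(int)       # (context, price) -> number of blocks

def choose_price(context: str) -> float:
    """One decision per (context, time block): explore with probability
    EPSILON, otherwise pick the arm with the best mean gross profit."""
    if random.random() < EPSILON or not any(counts[(context, p)] for p in PRICES):
        return random.choice(PRICES)
    return max(PRICES, key=lambda p: totals[(context, p)] / max(counts[(context, p)], 1))

def end_of_block_update(context: str, price: float, gross_profit: float) -> None:
    """Called once at the end of each time block with the summed profit."""
    totals[(context, price)] += gross_profit
    counts[(context, price)] += 1
```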
That being said, I agree that MABs are poor for experimentation (they produce biased estimates that depend on somewhat hard-to-quantify properties of your policy). But they're not for experimentation! They're for optimizing a target metric.
Surmountable, yes, but in practice it is often just too much hassle. If you are doing tons of these tests, you can probably afford to invest in the infrastructure for this, but otherwise A/B testing is just so much easier to deploy that it does not really matter that you will have a slightly ineffective algo out there for a few days. Interpreting the results is also easier, as you don't have to worry about the time sensitivity of the collected data.
You do know Amazon got sued and lost for showing different prices to different users? That kind of price discrimination is illegal in the US – it's closely related to actual discrimination.
I think Uber gets away with it because it’s time and location based, not person based. Of course if someone starts pointing out that segregation by neighborhoods is still a thing, they might lose their shiny toys.
You can do that, but now you have a runtime dependency on your analytics system, right? This can be reasonable for a one-off experimentation system but it's not likely you'll be able to do all of your experimentation this way.
No, you definitely have to pick your battles. Something that you want to continuously optimize over time makes a lot more sense than something where it's reasonable to test and then commit to a path forever.
Hey, I'd love to hear more about dynamic pricing with contextual multi-armed bandits. If you're willing to share your experience, you can find my email on my profile.
You can assign multi-armed bandit trials on a lazy, per-user basis.
So the first time a user touches feature A, they are assigned to some trial arm T_A, and then all subsequent interactions keep them in that trial arm until the trial finishes.
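In sketch form (Python; the dict is a stand-in for whatever persistent store you'd actually use, and the arm weights would come from the bandit's current policy):

```python
import random

ARMS = ["control", "T_A", "T_B"]
assignments: dict[str, str] = {}   # user_id -> arm; stand-in for a real store

def get_arm(user_id: str, weights: list[float]) -> str:
    """First touch of the feature assigns an arm (weighted by the bandit's
    current policy); every later interaction reuses that assignment
    until the trial finishes."""
    if user_id not in assignments:
        assignments[user_id] = random.choices(ARMS, weights=weights)[0]
    return assignments[user_id]
```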
Careful when doing that, though!
I've seen some big eyes when people assumed IDs to be uniformly distributed and suddenly their "test group" was 15% instead of the intended 1%.
Better to generate a truly random value using your language's favorite crypto functions, and be able to work with it without fear of busting production.
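Concretely, something like this (Python's secrets module; generate once and store it on the user record):

```python
import secrets

def new_user_bucket() -> int:
    """Generate once at user creation and persist it. Unlike raw IDs,
    this is guaranteed uniform, so a '1% test group' really is 1%."""
    return secrets.randbelow(10_000)   # uniform in [0, 10000)

# Carving out groups later is just a range check on the stored value:
in_test_group = new_user_bucket() < 100   # 1% in expectation
```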
That article is mostly about speed. The following seems like the one thing that might be relevant:
> Naively, you could take the random integer and compute the remainder of the division by the size of the interval. It works because the remainder of the division by D is always smaller than D. Yet it introduces a statistical bias
That's all it says. Is the point here just that 2^31 % 17 is not zero, so 1, 2, 3 are potentially occurring slightly more often than 15, 16? If so, this is not terribly important.
It is not uniformly random, which is the whole point.
> That article is mostly about speed
The article is about how to actually achieve uniform random at high speed. Just doing mod is faster but does not satisfy the uniform random requirement.
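To spell out the 2^31-mod-17 numbers from upthread, plus the standard rejection-sampling fix (illustrative Python):

```python
import random

N, D = 2**31, 17
q, r = divmod(N, D)        # q = 126322567, r = 9
# Residues 0..8 (below r) have q + 1 = 126322568 preimages each;
# residues 9..16 have q = 126322567. Small, but strictly non-uniform.

def unbiased(d: int) -> int:
    """Rejection sampling: discard draws above the largest multiple
    of d, after which the mod is exactly uniform."""
    limit = (2**31 // d) * d
    while True:
        x = random.getrandbits(31)
        if x < limit:
            return x % d
```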
In addition to the other excellent comments: IDs will become non-uniform once you start deleting records. That will break any hope you might have had of modulo and percentages being reliable partitions, because the "holes" in your ID space could be maximally bad for whatever use case you thought up.
And in that sense Tillich isn't that far from, say, Aquinas, who is consistent about asserting that existence is not a "real" predicate and that God's existence is outside of the world and outside of space and time.
You don't even need to squint that hard to see a commonality between Tillich's notion of discussing God symbolically and Aquinas's notion of doing so analogically, not to mention the contrast between finite humans and an infinite God who is beyond understanding. And not to mention that apophaticism – the idea that positive knowledge about God is impossible – has been a feature of Christian theology since the beginning.
So much of this can be taken in ways that not only aren't outside the bounds of Christian orthodoxy, but also align with more sophisticated Christian philosophical understandings of God.
That much, of course, is not why Tillich is controversial!
I'm guessing a sloppy equivalence due to the idea of "existence is outside of the world and outside of space and time" showing up in descriptions (e.g. "a platonist might assert that the number pi exists outside of space and time" from https://iep.utm.edu/mathplat/).
To be more specific: if the Universe (existence itself) can be described by mathematics, and mathematics is timeless (beyond mere physical existence), then essentially the physical Universe is inevitable and in some sense "created by" mathematics. In this view of creation, mathematics plays the role of God.
It's different from the MEMS-based devices this article talks about, which have the novelty of letting you do everything with a single probe. Though of course that comes with trade-offs.
You're not allowed to buy these without an NPI number. And while ultrasound is relatively easy, it's still not really usable without some training. I think there have been some studies of training users to do ultrasound at home for a limited set of views, but it's not really a "pick it up and look around" sort of thing.
I'm not sure why this piece mentions deregulation. The drinks limit was in 1956, while the sandwich spat was in 1958 – but deregulation wasn't until 1978, two decades later!
The collusion in question seems to only relate to regulation, which prohibited the airlines from competing on price. The incentives obviously don't work out the same with deregulation!
It mentions deregulation because it is challenging a purported assumption that the only airline regulations before 1978 were those set by government.
Arguably it was the government regulation against competing on price that led to the industry regulations on food service. Airlines tried to compete on other bases, so food service was much better than it is today.
It seems that, on paper, in this case specifically, re-engining rather than clean sheet made a lot of sense. Of course, we all know how things ended up in practice...
But at this point, if Boeing were to spend a lot of money on a clean-sheet design – even if they shipped it on time, would they have customers? It's hard to see how that would play out.
It's also not entirely a technical issue, anyway. In a vacuum the Sourcehut UX might be fine, but if people are used to GitHub-style UX, then they will have a hard time with Sourcehut and end up doing the wrong thing, like emailing the maintainer directly rather than using the mailing lists – through no fault of the mailing lists themselves!