tcoder's comments

Extreme sampling bias.

Startup has an opening -> Can't fill it with people in their network -> Can't fill it with people who come to them -> Resorts to an Angel List posting -> The posting stays online longer because they have trouble filling it -> High turnover means the job ad stays up forever or keeps getting reposted.

Great jobs tend to skew toward the left side of that funnel, hence the sampling bias.


Pretty much. Another major assumption here is that Angel List postings are legitimate, when in fact some companies post "job openings" online just to vacuum up resumes for a database.


Correct code link:

https://github.com/chuanli11/MGANs

Collection of other implementations of this feedforward neural style transfer approach:

https://tensortalk.com/?cat=feedforward-neural-style-transfe...

Or, regular neural style transfer:

https://tensortalk.com/?cat=neural-style-transfer


Namedtuple CPU-speed performance for some common operations is TERRIBLE compared to dictionaries, at least on 2.7. Orders of magnitude difference. So bad that it really matters. It's shocking how badly the implementors got this wrong.


namedtuple is just a thin wrapper around tuple. It literally builds the class definition (a subclass of tuple) as a string and then executes it; you can see the string template it uses here[1], and there's a rough sketch of the idea below. If you're interested in something like namedtuple, there are other[2] things[3] you can use depending on your use-case.

[1] https://github.com/python/cpython/blob/master/Lib/collection...

[2] http://stackoverflow.com/a/2648186

[3] https://pypi.python.org/pypi/frozendict/
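For the curious, here's a rough toy version of what that looks like (my own sketch, not the actual template from [1]; the real thing adds validation, __repr__, _asdict, pickling support, etc.):

  def tiny_namedtuple(name, field_names):
      # Build the class source as a string, then exec() it -- the same
      # build-a-string-and-execute trick collections.namedtuple uses.
      args = ', '.join(field_names)
      lines = ['class %s(tuple):' % name,
               '    __slots__ = ()',
               '    def __new__(cls, %s):' % args,
               '        return tuple.__new__(cls, (%s,))' % args]
      for i, f in enumerate(field_names):
          lines.append('    %s = property(lambda self: self[%d])' % (f, i))
      namespace = {}
      exec('\n'.join(lines) + '\n', namespace)
      return namespace[name]

  Point = tiny_namedtuple('Point', ['x', 'y'])
  p = Point(1, 2)
  print(p.x)                   # 1
  print(p[1])                  # 2
  print(isinstance(p, tuple))  # True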


It's really weird that immutable tuples take a performance hit, compared to mutable lists (almost 50% on access!?!).


That's because Python 2 has a special case for lists[1]. This does not exist in Python 3[2] (and therefore neither does the performance difference). A quick way to check it yourself is sketched below.

[1] https://github.com/python/cpython/blob/2.7/Python/ceval.c#L1...

[2] https://github.com/python/cpython/blob/master/Python/ceval.c...
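If you want to see it for yourself, a quick check along these lines (my own snippet; exact numbers will obviously vary by machine and build) should show the gap on 2.7 and not on 3.x:

  import timeit

  # On CPython 2.7 the list case hits the inlined fast path in the eval
  # loop [1]; tuple indexing goes through the generic lookup instead.
  list_time = timeit.timeit('x[500]', setup='x = list(range(1000))')
  tuple_time = timeit.timeit('x[500]', setup='x = tuple(range(1000))')
  print('list : %f' % list_time)
  print('tuple: %f' % tuple_time)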


It looks like list access got slower as a result though.


Curious which operations you're talking about. Get?


Pickling, for one.

  >>> from collections import namedtuple
  >>> import pickle
  >>> nt = namedtuple('nt', 'a b')
  >>> nt10k = [nt(1, 2) for i in range(10000)]
  >>> dict10k = [{'a':1, 'b':2} for i in range(10000)]
  >>> timeit pickle.dumps(nt10k)
  10 loops, best of 3: 26.3 ms per loop
  >>> timeit pickle.dumps(dict10k)
  100 loops, best of 3: 2.49 ms per loop


If you look at the StackOverflow answer in my sibling comment to yours, running 'obj.attrname' on a namedtuple takes longer because it needs to translate 'attrname' to an integer index and then run (e.g.) 'obj[0]'. Outside of that, I'm not sure.
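
A quick way to see that on 2.7 (details and reprs differ a bit on newer 3.x, where the descriptor is a C-level helper rather than a plain property):

  >>> from collections import namedtuple
  >>> Point = namedtuple('Point', 'x y')
  >>> type(Point.x)   # attribute access goes through a property descriptor
  <type 'property'>
  >>> p = Point(1, 2)
  >>> p.x == p[0]     # ...which ends up doing the same indexing as p[0]
  True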

