sirseal's comments | Hacker News

Wow! This is some fantastic stuff. To be (slightly) cynical, I wonder how much code reuse one could get out of this _in practice_. At any rate, I think it's a win to be able to do all of this kind of coding within the same language. Also, being able to bring functional programming, immutability, and types to these different environments is a sure win!


At the very least, you could reuse model code and some (most?) business logic, and likely the data layer as well. Even if you don't reuse any UI code (which seems unlikely), that's still significant.

Edit: In the long run, though, this type of approach opens up the opportunity to abstract low-level differences between platforms in a way that still uses native UI/functionality under the hood. For instance, you could make your high-level components generic enough to be used on any platform, and only change the implementation details per platform. You can do the same with plenty of other APIs as well (local data storage, location, etc).
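To make the abstraction idea concrete, here's a minimal sketch in Python (the thread is about Scala, but the shape is the same). All names here are hypothetical: shared logic is written once against an interface, and only the backing implementation changes per platform.

```python
# Hypothetical sketch: shared high-level logic written against an
# interface, with the implementation swapped out per platform.
from abc import ABC, abstractmethod
from typing import Optional

class KeyValueStore(ABC):
    """High-level local-storage API shared across platforms."""
    @abstractmethod
    def put(self, key: str, value: str) -> None: ...
    @abstractmethod
    def get(self, key: str) -> Optional[str]: ...

class InMemoryStore(KeyValueStore):
    """Stand-in for a platform backend (localStorage, prefs, a DB, ...)."""
    def __init__(self) -> None:
        self._data: dict = {}
    def put(self, key: str, value: str) -> None:
        self._data[key] = value
    def get(self, key: str) -> Optional[str]:
        return self._data.get(key)

def remember_user(store: KeyValueStore, name: str) -> None:
    # Shared "business logic": written once, runs on any platform
    # that supplies a KeyValueStore implementation.
    store.put("user", name)
```

The per-platform cost shrinks to implementing `KeyValueStore` for each target; everything above it is reused.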


I've been using 2 distinct languages for FE and BE from the very beginning of my project, and I think that's one of the best decisions we made.

It's true that you have to duplicate your models, i.e. physically write the same code in 2 languages, but with ORMs this is pretty fast. At later stages, the amount of duplicated code keeps shrinking. On the plus side, this intrinsically prevents you from taking shortcuts, leading to better separation of duties.

It depends on how big you think your project will become, I guess. :)


Taking a task that a JavaScript developer could do and putting a dependency on them being a Scala developer as well isn't a win imo (as a Scala/Javascript dev)


I fully agree. I've done minor things in Java and I've been doing PHP for about 3 years. I've picked up React and I've been trying to get into Scala. It's only been three days, but I think it's not for the faint of heart. I can think of a number of solid front-end devs whose lives would be hell if you threw them into Scala. (As an aside, I actually like what I've learned so far, though I wish the 'documentation' weren't so terrible; I imagine it's sparse because it would be too hard to keep up to date.)


Thanks for writing this up! How'd you pay for this? Did you have to take out a loan? Did the companies have a payment schedule?


You can check out his website, it has crazy detailed info about his earnings. I kinda feel like a creeper just looking at it. :/


We had to prepay before they would do the work, $2.5k at a time. They've probably been burned before by people who can't pay, and I can see why they would avoid that.

We live well below our means so we can donate more, so we did have some savings.


Wow. That's crazy expensive. I'm glad to hear about this story. I don't think I'd pay $50k for something like this.


No.


What's the difference?


For example, if you calculate the magnitude of a thousand-dimensional vector, you end up with a single scalar. If you calculate the magnitude of a thousand one-dimensional vectors, you end up with a thousand scalars. Additionally, for a thousand-dimensional vector the ordering of the components is important, whereas for a thousand one-dimensional vectors, ordering isn't necessarily something we know about.

The difference is subtle and occasionally pedantic, but can be very important depending on what exactly one is doing.
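A quick NumPy illustration of that shape difference (NumPy comes up later in the thread; this is just a sketch):

```python
import numpy as np

v = np.ones(1000)                  # one thousand-dimensional vector
mag = np.linalg.norm(v)            # a single scalar: sqrt(1000)

vs = np.ones((1000, 1))            # a thousand one-dimensional vectors
mags = np.linalg.norm(vs, axis=1)  # a thousand scalars, one per row
print(mag, mags.shape)             # 31.6227... (1000,)
```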


I'll give it a try...

Imagine three 4-by-1 vectors, each "one-dimensional". Twelve total scalars, each vector with four rows and one column. Arrange these three vectors side by side and merge them into a single 4-by-3 matrix. This matrix is "two-dimensional".

Now, let's imagine five such matrices, each 4-by-3. Stack the five matrices one on top of the other. We currently have a 4x3x5 matrix. This matrix, which contains 60 scalars, is "3-dimensional".

Repeat a similar exercise 997 more times and you have a 1000-dimensional matrix.

Compare that matrix to this: 1000 of our original 4-by-1 vectors arranged side by side, which gives a 4x1000 matrix, which is simply a "two-dimensional" matrix with 4000 elements.
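The shapes in this construction are easy to check with NumPy (terminology caveats aside, this tracks the arrangement described above):

```python
import numpy as np

vecs = [np.zeros((4, 1)) for _ in range(3)]  # three 4-by-1 vectors
mat = np.hstack(vecs)                        # side by side: shape (4, 3)
cube = np.stack([mat] * 5, axis=2)           # five stacked: shape (4, 3, 5)
wide = np.hstack([vecs[0]] * 1000)           # 1000 side by side: (4, 1000)
print(mat.shape, cube.shape, cube.size, wide.shape)
```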


A vector with four rows and one column is a four-dimensional vector. A one-dimensional vector can be described with a single number, a 1x1 matrix if you like.


Oops. You're correct: a 2x1 vector is two-dimensional, a 3x1 vector is three-dimensional, etc. <Trying to remember the terminology from linear algebra 15 years ago.> Each element of the mx1 vector represents a magnitude along an orthogonal dimension ('scalars' for a set of 'basis vectors'). So then a 1000x1 vector would be "thousand-dimensional"; each element represents a magnitude along an axis. But is this strictly equivalent to 1000 single-dimensional vectors? eli173 suggests not, and I agree.

In constructing my incorrect answer in the grandparent comment, my thought process was guided by the way Matlab/numpy treats these items (and I think I'm on solid ground that Matlab/numpy treat them differently because mathematicians consider them differently). The built-in functions operate very differently (if they work at all) for

    size(A) = (m,1)
and

    size(A) = (m,n≠1)
So there may be 1000 numbers floating in the ether, but conceptually they're not the same. Multiplying a 1000x1 vector by a 1xp vector has a completely different result than multiplying one thousand 1x1 vectors by that same 1xp vector.
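In NumPy terms, with p = 3 for concreteness:

```python
import numpy as np

col = np.ones((1000, 1))   # a 1000x1 column vector
row = np.ones((1, 3))      # a 1xp row vector (p = 3)
outer = col @ row          # outer product: shape (1000, 3)

one = np.ones((1, 1))      # a single 1x1 "vector"
small = one @ row          # shape (1, 3); repeating this 1000 times
                           # yields 1000 separate (1, 3) results,
                           # not one (1000, 3) matrix
print(outer.shape, small.shape)
```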

Although, only many hours later do I realize that the original submission title might've been wordplay on the phrase "a picture is worth a thousand words", so my brain is not reliable today. I shall refrain from spewing more-likely-than-not incorrect statements concerning linear algebra.


From the Crema wiki that I just glanced at, it is indeed not possible to write infinite loops:

"The only type of loop supported by Crema is the foreach loop. This loop is used to sequentially iterate over each element of a list. Crema specifically does not support other common loops (such as while or for loops) so that Crema programs are forced to operate in sub-Turing Complete space. In other words, foreach loops always have a defined upper bound on the number of iterations possible during the loop's execution. This makes it impossible to write a loop that could execute for an arbitrary length of time."


This is a fantastic explanation. Thank you for phrasing it so concisely.


Sounds like it could work. With any supervised learning algorithm, the key is to have good labeled data that accurately captures the function you want to learn (i.e. something that exposes errors + their causes and supplies the proper sysops fix).
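As a toy sketch of what such labeled data might look like (all messages and fix labels below are made up, and a trivial token-overlap matcher stands in for a real learner):

```python
# Hypothetical (error message -> fix) training pairs.
examples = [
    ("disk full on /var/log", "rotate_logs"),
    ("no space left on device", "rotate_logs"),
    ("out of memory: killed process", "add_swap"),
    ("oom killer invoked", "add_swap"),
]

def suggest_fix(message: str) -> str:
    """Return the fix label of the most token-similar known error."""
    words = set(message.lower().split())
    def overlap(pair):
        return len(words & set(pair[0].lower().split()))
    return max(examples, key=overlap)[1]

print(suggest_fix("disk full on /home"))  # -> rotate_logs
```

A real system would swap the matcher for a proper classifier, but the key ingredient is the same: accurately labeled (error, fix) pairs.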


They are actually a public company, since you can buy stock in them.


Nope.


Do you really believe that it'll ever be on the JVM? Why would Microsoft put any effort into that? The CLR is now cross-platform. If you want functional programming similar in style to F# (and the ML family of languages), check out Scala.

