Lots of froth, lots of buzz, lots of hype. Few facts, few examples. It's actually hard to comment on, as there's so little to get hold of. Standard Wolfram fare, really (still waiting for CAs to take over the world), but always disappointing from someone evidently so smart.
Oh, and if Mathematica is the basis of the "Wolfram language", and this is the universal computing language of our new "interconnected brain", I'm leaving for another universe. Unless the boy who cried wolf really has cornered one this time.
What Wolfram says is nonsense but what he produces is still good.
Wolfram Alpha was massively over-hyped and billed as a Google-killer. It is not remotely that, but it's still an incredibly useful and inimitable product.
But I do grant him the enthusiasm of the forever hacker. I read the article and it came across as someone who had a breakthrough and just wants to talk about it.
>it came across as someone who had a breakthrough and just wants to talk about it.
Yeah, it did. It also reminded me of one of my own "Moments of Absolute Clarity" that tend to unravel considerably when met with the hard test of execution in the real world.
There's something about the vagueness of the promise that makes it so reminiscent. Sometimes you can get the sense of being on the verge of something big--but the gnarly difficulty lies in bringing it down from the abstraction.
So does your stoned roommate who just finished making whirlpools in a half-full bottle. That's about as much of a filter as Wolfram seems to apply to his own ideas these days.
Yep, I don't get why the Wolfram Language would replace APIs we already have in place.
Because it's in natural language? WolframAlpha demonstrates (unsurprisingly) that it's still finicky. You still have to do just as much work to ensure connected services are working together properly.
Because--according to universal computation--it covers the entire computable world? So does any Turing-complete language.
I do see this as another cool thing along the lines of IPython Notebooks or JSFiddle, where you can quickly hook up to services and share the results. What it uniquely adds is WolframAlpha's datasets and some of Mathematica's features. So it'd be nice for homework sets or Bret Victor-esque reactive documents (see http://worrydream.com/Tangle/).
Hmhmhmhm... I'm working with the people who are building these cloud systems, so let me elaborate a little:
No, the Wolfram Language is not natural language. It's the LISP-like language behind Mathematica. We needed to do that from a branding perspective so that the Mathematica product can continue to exist for the academic market without being conflated with the underlying language, which has much wider aspirations.
But as for natural language, you can press '=' and go into 'natural language mode' and write stuff like "total the list", but in my opinion it isn't very good yet and is only really useful for absolute beginners. I think it could get much better in the future when we have nice sophisticated type inference going (which I am working on right now).
As for IPython: the In[..] and Out[..] lines you see in IPython (and amusingly some other cloud system-based IDEs now) mirror v1 of Mathematica back in 1987 (I believe deliberately). It's an amusing accident of syntax that evaluating In[1] works in both Python and Mathematica.
But yes, exactly, part of this whole story is an online IDE (actually, a set of them) that makes it extremely easy to get a whole system deployed. Imagine setting up some machine learning, creating some slick visualizations, allocating some persistent storage, putting it behind an API, and creating an embedded dashboard, all in the space of 20 minutes and a few dozen lines of code.
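Roughly, a toy sketch of the shape of it (not shipping code, and the exact option names are an assumption on my part):

    (* train a tiny classifier, then deploy it behind a public web API; *)
    (* Classify, APIFunction and CloudDeploy are the announced          *)
    (* cloud-era primitives                                             *)
    classifier = Classify[{1.2 -> "small", 0.9 -> "small",
                           8.7 -> "large", 9.9 -> "large"}];
    api = CloudDeploy[
      APIFunction[{"x" -> "Number"}, classifier[#x] &],
      Permissions -> "Public"];
    (* api is now a CloudObject with a URL you can hit over HTTP *)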
The closest existing competitor is FP Complete's cloud Haskell system, but I'd love to know about others.
Ah, so it actually is Lisp. I immediately thought Wolfram had a Lisp epiphany when he said "the idea of symbolic programming, and the idea of representing everything as a symbolic expression". This is basically the idea McCarthy had 50 years ago. However, I always had in mind that Mathematica features a more Python-like language and did not know that there is a Lisp inside.
Sooooort of. There are major differences, differences that make a difference, so to speak.
Lisp: everything is lists.
Wolfram Language (prior to V10): everything is an expression. An expression has a head, and parts. The head is the primary place you attach rules. The head can be List, but can also be, say, If, or Disk, or Entity, or Timeseries, or Image, or Graph, or Graphics, or Button, or Frame, (and on and on and on).
Wolfram Language (v10): expressions can be numerically indexed (e.g. Part[{"A","B","C"}, 2] == "B") or symbolically indexed (Part[<|"A" -> 1, "B" -> 2, "C" -> 3|>, "B"] == 2). This new data structure is called an Association (analogous to a hash map / associative array / dictionary, of course), but eventually it could have heads other than Association.
Anyway, it's all quite uniform. No pointers, no references, nothing you "can't see". And the new Association data structure interacts beautifully with lists when you allow it to interact with Part -- you end up with something like XPath, but capable of expressing, for example, almost all of SQL or LINQ, in a very functional way.
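To make that concrete, a small made-up example on a "table" (a list of Associations):

    people = {<|"name" -> "ann", "age" -> 34|>,
              <|"name" -> "bob", "age" -> 27|>};
    people[[All, "age"]]                    (* {34, 27} -- SELECT age      *)
    Select[people, #age > 30 &]             (* rows WHERE age > 30         *)
    SortBy[people, #name &][[All, "name"]]  (* ORDER BY name, then project *)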
Bombastic, long winded and narcissistic. On target for Stephen Wolfram.
But, if you've used Mathematica, it is the closest thing we have right now to the Star Trek computer. In a single line I can get solutions to complex problems that would take days in Ruby, Lisp or Haskell. It is the same distance again as Lisp is from C.
In fact, many of the failures you see written up on HN I've been able to model and solve in a few minutes with MMA. In particular, the Rap Genius Heroku queue issue.
> In a single line I can get solutions to complex problems that would take days in Ruby, Lisp or Haskell. It is the same distance again as Lisp is from C.
> In fact, many of the failures you see written up on HN I've been able to model and solve in a few minutes with MMA. In particular, the Rap Genius Heroku queue issue.
Can you elaborate and/or provide a thorough example of this? I'm curious.
Sure. During the Rap Genius debacle there were a bunch of people taking time to figure out what was wrong with Heroku's routing. I was entertained because there were two parts to the problem: A) queueing and B) optimization.
So with a line of code you could represent Heroku's queues and determine exactly how many dynos you needed. Or you could try different queueing methods and determine how much money you could have been saving.
In some sense Mathematica is further down the What vs How line of language power. Even in ruby, Clojure or Haskell you're still left specifying HOW to do optimization, integration, etc. Not what you'd like to optimize, integrate or manipulate.
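A trivial made-up example of that: you state the what, and the evaluator supplies the how:

    Maximize[{x y, x + y == 10, x > 0, y > 0}, {x, y}]
    (* {25, {x -> 5, y -> 5}} -- no solver code written *)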
Well, here's a quick simulation assuming requests arrive according to a Poisson process. The Manipulate function takes an expression and generates an interactive version of it. So in this case it shows a graph (list plot) of the simulation, along with a slider to control the number of new requests (arrivals).
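Something like this (a reconstructed sketch, not my original line; the fixed service capacity of 6 requests per tick is an assumption):

    (* Poisson arrivals each tick, fixed service capacity,  *)
    (* queue length over time, slider for the arrival rate  *)
    Manipulate[
     ListPlot[
      FoldList[Max[#1 + #2 - 6, 0] &, 0,
       RandomVariate[PoissonDistribution[rate], 200]],
      AxesLabel -> {"tick", "queued requests"}],
     {{rate, 5, "arrivals per tick"}, 1, 20}]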
If you wanted to be ambitious, you could simulate different servicing distributions, and times. But that might add a whole 2-3 lines of code to the solution.
So how does generating a graph actually solve the problem in the real world? Or am I misunderstanding what the problem is? I was assuming the problem was with some real world application, most likely regarding the implementation of some complex networking algorithm for Heroku's system. But if it's just some abstract math problem, of course it would make sense to use something like Mathematica, MATLAB, etc.
That was just to demonstrate the problem.
You could solve it by using NSolve[expr, vars] and giving a limit for the response time you'd like. You'd get back the number of dynos you'd need to meet those requirements.
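Roughly like this, with a toy model of my own (each dyno treated as an independent M/M/1 server, made-up numbers):

    (* find the dyno count c where mean response time hits a 200 ms target *)
    With[{lambda = 100., mu = 25., target = 0.2},
     NSolve[1/(mu - lambda/c) == target && c > 0, c, Reals]]
    (* {{c -> 5.}} *)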
On Heroku's side, you could model their network with a few lines of code and experiment with different types of queues to find things that work well and are affordable.
As PG said in Beating the Averages, it's one of those things that's easy to dismiss when you're looking up the power curve.
This presentation [0] does a good job of showing what's special about the Mathematica language. I don't use it myself (closed source), but it is undeniably cool, powerful, and different. It turns out that Mathematica’s design has a conceptual elegance that makes it capable of a type of interaction that’s nothing short of astonishing in its flexibility and generality.
I have to generally agree here, although my experience has mostly been with Wolfram Alpha. I'm continually amazed at how easily it lets me grapple with optimization problems or solving for different variables (when I barely remember how to do them myself).
However, Mathematica has always felt a bit niche to me. I wish it were more of an open source project like Octave, because I think the approach is awesome, but too often I feel constrained by Wolfram's way of looking at things. If I could combine Matlab, Go, Python, Mathematica, Excel and (yikes) PHP for its hands-on get-@#!$-done facility, that would be my ideal language.
An entertaining read, but as an enthusiast of NKS-style science (and ohmygod bias! someone at Wolfram Research) I don't think the review is all that intellectually substantive: http://news.ycombinator.com/item?id=3974503
“I happen to think that most of it is written in simple, straightforward, unpretentious language”
Hahahaha. That’s the funniest thing I’ve read someone write about Wolfram.
The comparison to Darwin is also pretty amusing. Darwin’s book is amazing because it spends most of its effort showing us evidence, so we can draw our own conclusions guided by an overwhelming pile of careful data and research, plus a bit of sharp analysis. By contrast, Wolfram’s book spends most of its effort telling us how important it (and he) is, on unremarkable and tediously repetitive evidence/analysis.
edit: note the Oxford American Dictionary’s definition of pretentious: “attempting to impress by affecting greater importance, talent, culture, etc., than is actually possessed”.
For example count the number of things named “Wolfram” in this most recent blog post, or count the hyperbolic adjectives and adverbs: “amazing”, “whole different level”, “profoundly important”, “incredibly useful”, “breathtaking”, “exciting”, “spectacular”, “sophisticated”, “universally accessible”, “new kind of language”, “cover[ing] all forms of computation”, “by far the largest ever”, “immense”, “spectacularly productive”, “remarkable”, “immense”, “completely general and uniform”, “immense”, can’t-do-it-justice, “immediately meaningful”, “absolutely practical, and spectacular”, “amazing”, “never imagined before”, “incredibly fertile”, “disorienting”, uniquely converging, “universal”, incredibly powerful, “instant”, “absurdly”, “seamless”, “dauntingly long”, “widely accessible”, “wonderful”, “exciting”, “with full semantic fidelity”, “instantly programmable”, “a kind of global brain”, “convenient”, “efficient”, “a new level of computation”, “our most important technology project yet”, “incredibly exciting”.
If you took out all the adjectives, and all the instances of “Wolfram”, there’d be pretty much nothing left. ;) [NKS is thankfully not quite this bad on a sentence-by-sentence level. But then it has 1000 pages to repeatedly tell us how revolutionary and brilliant it is.]
Objectively, it avoids jargon and florid prose (apart from some rather repetitive language). It means a child could read it, and perhaps that makes it boring to read.
Perhaps you found the actual experiments he did boring, or tedious. I think they're pretty cool, and I've now done my fair share of them, and taught other people how to do them. The axiom systems, graph automata, constraint satisfaction, causal graph stuff, and entropy stuff are all cool. And the notes are a goldmine of references.
Also, this is probably the least important of the 8 specific claims of Shalizi's that I rebut. But hey.
edit: because we seem to be using edits to converse. For 25 years, Wolfram has been the company brand, and people roughly know what it is -- how can we NOT prefix our products with the word Wolfram? Also, did you just call a marketing-type pre-announcement hyperbolic? http://www.youtube.com/watch?v=BETSuT2RNLs
It'll be neat when Wolfram uses their (his?) almighty genius to utterly change the world of computing. It'll also be neat when they implement basic undo/redo functionality in Mathematica.
Srsly. It is unbelievable that they didn't fix this in Version 9. In many ways MMA is incredibly sophisticated software, but it lacks this most obvious of features.
You can do awesome stuff with their math-based tools. There are tools for statistics, conversions between units, calculations with dates & times, and a lot more. Look for yourself at http://www.wolframalpha.com/examples/
Now, they also have the data: weather data, data about media (movies/music), languages, stocks, and a lot more.
But for developers there is a problem: you can't build apps with their platform. For example, you can't really store or retrieve data, and there are more practical problems.
So I think they have already solved the practical problems, because they are developing software this way themselves. To make it friendly for developers, they only have to build some frontend for it.
Excuse the sarcasm, but Mathematica is unjustifiably expensive for something meant to be general purpose and ubiquitous, and I'm sure any derivative or superior tool will be as well.
I'm a Wolfram science summer school instructor, so I'm in a position to see the kinds of trends there are towards NKS-style methods. It's slow, but definitely there. Some of it is directly inspired, some of it is 'convergent evolution'.
Here's a very high-level and eclectic list of themes and specific research directions I can remember offhand: agent-based modeling in economics and operational research, game theory automata in evolutionary biology, lattice gas methods in fluid dynamics, tessellation-based approaches to solving PDEs, L-systems in architectural design, logic automata for programming array-based computers, cellular automata-based PRNGs for stream ciphers, program search for finding lock-free concurrent algorithms, RBMs as used in deep learning.
In fact, I used an exhaustive NKS-style search the other day to find novel data query primitives (i.e. what other functions live in the type-signature space that MapReduce occupies?).
> cellular automata-based PRNGs for stream ciphers
This is something that kind of died in the late 80s. One of the most noteworthy cases was Damgård's CA-based hash function, in the paper that proved collision resistance of the Merkle-Damgård mode [1]. It was quickly broken [2]. The SHA-3 winner, Keccak, has roots in CA-based designs [3], but at this point it's little more than a historical footnote.
For 2, I think it was overly ambitious to think that within a program family so small it has only 88 members there would be one ideal hash function. And if you compare the simplicity of Damgård's design to the contortions that modern hash designs employ to stymie cryptanalysis, it's hard to think that other constructions wouldn't perform better, even on that particular automaton.
For 3, I think having a full 7 slides of Bertoni et al's slide deck devoted to the influence of CA-based cryptography on SHA-3 makes it fair to call the approach influential.
I was under the impression that lattice gas methods in CFD were more or less obsolete. They're nice in that they're unconditionally stable and only use bitwise operations, but the amount of CPU time you save from this is massively outweighed by the number of simulation runs you need to do to average out the noise. They're also not Galilean invariant (at least not without messing around with rescaling factors). Lattice Boltzmann is much more practical.
Right, but fundamentally Lattice Boltzmann is just an optimization on top of lattice gas automata. It's continuum-ing up from a CA rather than discretizing down from Navier-Stokes.
And the Galilean invariance thing is kinda cool: who knew that you didn't need something as fundamental as Galilean invariance?
You can see it as continuum-ing up from a CA, or you can see it as a discretization of the continuum Boltzmann equation.
What do you mean you don't need Galilean invariance? In the real world, fluid motion is Galilean invariant (unless you're talking relativistic motion, which is a whole other class of thing that lattice gases have trouble with). You might be simulating something but it sure ain't a Navier-Stokes fluid.
Aha! Physical fluids aren't Navier-Stokes, which is just a continuum approximation that was invented before atoms were even a scientifically established idea. Which of course you know already, but it means it's begging the question a little to say "you might be simulating something but it sure ain't a Navier-Stokes fluid".
For Galilean invariance, I've previously waxed philosophic about why I think that's a feature, not a bug (at least, pedagogically): http://news.ycombinator.com/item?id=5931434
They might not be strictly Navier-Stokes but the way in which real fluids deviate from N-S is completely unlike the way in which lattice gas fluids deviate from N-S. Absolutely, you can discretize time and space and get a useful fluid model, and (as Wolfram showed in a very good paper in 1986) you don't even have to worry too much about isotropy, as long as your lattice obeys certain constraints. You do need Galilean invariance, though.
Look, I do get your point about lattice gas fluids being interesting conceptually, and I do think they make an interesting point about how little you can get away with and still yield a useful fluid model at the macro scale, but I don't think they're a good example of a trend towards NKS-style methods. If anything the trend in that field since the late 90s has been away from NKS and towards seeing the lattice Boltzmann method as a solver for continuum treatments at the Boltzmann (rather than N-S) level.
> the way in which real fluids deviate from N-S is completely unlike the way in which lattice gas fluids deviate from N-S
Can you describe in more detail how the deviations differ? Specifically, how do these differences affect numerical solutions?
> If anything the trend in that field since the late 90s has been away from NKS and towards seeing the lattice Boltzmann method as a solver for continuum treatments at the Boltzmann (rather than N-S) level.
Also, which continuum are you referring to here? Number of particles? Lattice spacing?
Off the top of my head (it's been quite a while...), lattice gas models wind up giving you a velocity-dependent viscosity, i.e. the viscosity of the fluid is a function of how fast it is travelling relative to the lattice. This is unphysical, to say the least: while I think you can apply some sort of rescaling to alleviate it, that doesn't make it any easier to get to high Reynolds number flows. That and the huge amount of averaging make lattice gas rather impractical compared to later methods.
> Also, which continuum are you referring to here? Number of particles? Lattice spacing?
By continuum I mean you write down the continuum Boltzmann equation, i.e. a partial differential equation for the evolution of the single particle distribution function. You can then discretize this onto a lattice to recover the lattice Boltzmann method.
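For readers following along, these are the standard BGK-collision forms being contrasted (textbook material, nothing specific to this thread):

    % continuum Boltzmann equation with BGK collision operator:
    \partial_t f + \xi \cdot \nabla_x f = -\frac{1}{\tau}\,(f - f^{\mathrm{eq}})
    % discretized onto a lattice of velocities c_i, the LB update rule:
    f_i(x + c_i \Delta t,\ t + \Delta t) = f_i(x, t) - \frac{1}{\tau}\bigl(f_i(x, t) - f_i^{\mathrm{eq}}(x, t)\bigr)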
Lattice Boltzmann has these same weaknesses (whether one cares, as you say, depends on the Reynolds regime). The point is, starting from the "simplest possible gas program" has yielded a useful branch of CFD, and one would have thought it extremely unlikely to work if you used intuition from traditional mathematics... it's as discrete and analytically intractable as you get.
Anyway, to me, the continued application of these methods is one data point that NKS-like methods are proving useful across a variety of domains.
Err, no it doesn't - the viscosity in LB is a function purely of the relaxation time (for LBGK anyway). Nor do you need to do any simulation repeats to average out the noise. The main weakness of LB is that it's not as numerically stable as a lattice gas method.
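Concretely, the standard LBGK relation (in lattice units with \Delta t = 1, quoting the usual result):

    % kinematic viscosity depends only on the relaxation time \tau:
    \nu = c_s^2 \left( \tau - \frac{1}{2} \right), \qquad c_s^2 = \frac{1}{3}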
I don't know; it certainly feels kind of silly reading the prologue where it promises to reinvent everything, but I found that book to be fascinating and thought-provoking. It didn't change the world, but a lot of people are throwing the baby out with the bathwater; there's a lot of good stuff in there. If he had named it "Cellular Automata are fun!" or something less bombastic, it would be an instant classic.
Also, I think the key insight is pretty deep: systems are either trivially simple or limitlessly complex; once you reach a (very low) threshold of complexity, you can do pretty much anything.
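The canonical demonstration fits in one line of Mathematica itself (rule 110, an elementary CA proved universal by Matthew Cook):

    (* 300 steps of rule 110 from a single black cell *)
    ArrayPlot[CellularAutomaton[110, {{1}, 0}, 300]]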
I wouldn't say Wolfram himself has a large following in the sciences. At least in my area (math and engineering) he is actually widely and heavily criticized. However, some of the things his company makes are widely used. Mathematica is a genuinely wonderful tool (if you have deep pockets) and Alpha is also very popular.
>There are plenty of existing general-purpose computer languages. But their vision is very different—and in a sense much more modest—than the Wolfram Language.
Sounds very promising; Wolfram seems to be on the right track here. Pricing might be an issue, but I am looking forward to the "language" features like the instant cloud deployment.
There is more happening with R, D3/JavaScript infovis, and the IPython Notebook stack around data science than Wolfram as a singular corporation can possibly keep up with. In a recent version R was embraced and extended; is this another step in that same sort of strategy?
The value of these open source communities goes way beyond the core language and arises out of their structure.
This will go the way of Linux/BSD (only Microsoft could justify rolling forward alone with its own OS kernel).
I love hearing Wolfram give talks (I went to his Elements intro at the SF Maker Faire, right after the iPad came out), and reading his blog posts; I get sucked in, thinking: this genius, he thinks of everything!
Then I try to find the "Redo" button in Mathematica and I reconsider.
Unless I misunderstand what you mean by 'term rewriting', fexpr-based Lisps are similar: everything is a rewrite on the program tree, but 'normal' functions evaluate their arguments before doing things with them, and compile-time macros (not reader macros) are effectively a first pass over that tree where any symbol not already bound to a macro is treated as the identity function.
The Kernel language and its associated $vau calculus go into this more, and I've experimented with it somewhat myself, having ported Manuel's wat-js to Perl.
> Pattern matching and term rewriting is the fundamental operating principle of Mathematica’s evaluator. All other programming constructs are implemented by way of term rewriting.
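A tiny made-up illustration of that style: definitions are just rewrite rules attached to heads, and evaluation repeatedly rewrites until a fixed point:

    fib[0] = 0; fib[1] = 1;                 (* base-case rules              *)
    fib[n_Integer?Positive] := fib[n - 1] + fib[n - 2]
    fib[10]                                 (* 55, by repeated rewriting    *)
    a + b^2 /. b -> 3                       (* a + 9: explicit rule applied *)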