I'm assuming you're talking about the Platform / BSP side of the ugliness.
You have to look at the lineage of how things evolved to the current state - on the PC ecosystem, everything is already on enumerable buses (PCI, USB, etc) with standards (PCI, UEFI, etc) describing how it's all supposed to find what device is connected where and have it all work together. The incremental cost of opening that up to the public is thus fairly small since you need to build your platform to adhere to the standards that are already there anyway. That's how you get to being able to boot a kernel image on a random system or insmod a random driver you found and expect it to (mostly) work.
In the SBC/Embedded ecosystem, there really aren't any standards. Since the internal topology of each SoC is different and the pace of new SoC releases is so high, there's no time for standardization - you throw in random IPs from a bunch of different vendors, figure out how to connect it all together, and get it to market. In this scenario, having something documented is actually a negative thing - once something is documented, people expect it to work the same way going forward. You can hide a lot of hardware deficiencies in binary blobs, something that's very difficult to give up. Thus, there's a huge disincentive to provide full hardware documentation. I'd imagine that in some cases, for licensed IPs, the SoC vendors may not even be allowed to do so even if they wanted to.
Things like DeviceTree are trying to nibble around the edges of this problem, but given the current state of things, it'll be a while yet, as a lot of the building pieces don't even seem to be in the picture.
The lineage is hardly an excuse - the PC started out just like ARM is now, with manual IRQ assignment, no hardware detection, and the like. But this was solved in the mid-90s with plug-and-play standards. It's really sad that in 2015, embedded SoCs still don't have anything comparable.
> Dynamic module loading means that it would only increase disk usage.
Disk? You mean, mass storage?
Hardware detection is definitely bloat on this kind of system, because there isn't much variety to begin with.
PnP was created to let people who are less and less RTFM-inclined install new hardware on a platform that won its market share because of its extensibility.
SOCs and SBCs OTOH are generally used in closed embedded systems that are often not at all designed for evolution. Using auto-detection would be a waste of resources and potentially cause problems.
On the grid-tied solar inverter front - the spec sheet says the battery voltage is 350~450 V, so we're looking at about 108 lithium-ion cells in series (25 x 18650 cells?) at 400 VDC nominal. This is quite different from the typical lead acid battery pack voltage of 12, 24, or 48VDC that's used for battery backup storage, so a lot of existing solar battery storage infrastructure may not even work... this means that people may need to buy a whole new set of supporting hardware to integrate this into their existing solar systems instead of being able to update the software on existing hardware.
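As a quick sanity check of that cell count (assuming a 3.7 V nominal Li-ion cell, which isn't in the spec sheet):

    #include <stdio.h>

    int main(void)
    {
        /* Rough check of the series cell count above. The 3.7 V nominal
           per-cell voltage is an assumption, not from the spec sheet. */
        double pack_nominal_v = 400.0;   /* midpoint of the 350~450 V window */
        double cell_nominal_v = 3.7;
        printf("cells in series: ~%.0f\n", pack_nominal_v / cell_nominal_v);  /* ~108 */
        return 0;
    }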
Lithium Ion also has quite a different (and much less forgiving!) charging cycle that requires much more monitoring of things like temperature, though I'd imagine a lot of that would be built-in as a safety mechanism directly into the Powerwall.
What I've heard is that the National Electrical Code becomes much more stringent on battery systems greater than 48V, with the line drawn at 48V due to it being used widely in the phone landline system. I'm not sure how true that story is, but I'd imagine extra care is probably warranted. 10kWh is about 9kg of TNT. :-)
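For the curious, the TNT comparison is just an energy conversion (using the standard 4.184 MJ/kg TNT equivalent); a minimal sketch:

    #include <stdio.h>

    int main(void)
    {
        /* Back-of-envelope for the "10kWh is about 9kg of TNT" line above. */
        double joules_per_kwh    = 3.6e6;
        double joules_per_kg_tnt = 4.184e6;   /* standard TNT equivalent */
        double kwh = 10.0;
        printf("TNT equivalent: %.1f kg\n", kwh * joules_per_kwh / joules_per_kg_tnt);  /* ~8.6 kg */
        return 0;
    }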
Yes, talk to telco engineers from the power side of the biz and they will have war stories about accidents and near misses with Central Office (Exchange) DC power.
High voltage/amperage DC is quite different to AC power, and I suspect that building/wiring codes are going to need to be updated if this local storage takes off.
Dropped a wrench across the terminals on the deep cycle backup batteries inside a telco switching center. Had to disconnect the whole bank of batteries to fix it because the wrench welded itself to the terminals. It wasn't a tack weld either.
Solar systems are generally HVDC off the panel. I think in practice you'd tie the three sources (grid, panels, battery) together in the AC domain with a computer monitoring consumption and generation then signalling the battery system to charge or discharge.
Higher voltage means lower current for the same power, which means less resistive voltage drop in the wiring, and whatever drop remains is a smaller percentage of the supply, so you don't need lots of copper.
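To put rough numbers on that (the 5 kW load and 0.05 ohm round-trip wire resistance are made-up figures, just for illustration):

    #include <stdio.h>

    /* Same power and wire resistance, different bus voltages. */
    static void copper_loss(double volts, double watts, double wire_ohms)
    {
        double amps = watts / volts;
        double loss = amps * amps * wire_ohms;      /* P_loss = I^2 * R */
        printf("%6.0f V: %6.1f A, %7.1f W lost (%.2f%%)\n",
               volts, amps, loss, 100.0 * loss / watts);
    }

    int main(void)
    {
        copper_loss(48.0,  5000.0, 0.05);   /* ~104 A, ~543 W lost (~11%)  */
        copper_loss(400.0, 5000.0, 0.05);   /* ~12.5 A, ~8 W lost (~0.2%)  */
        return 0;
    }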
Yes, but you're throwing it into an inverter, and it's not like you're carrying it too far. High voltages also present problems of their own (isolation, etc).
Now, on second thought, maybe it's more efficient to go from 400VDC to 120VAC/240VAC (I don't know the exact values)
48V is under the threshold at which electrical shocks become dangerous. 48V or less is "low voltage" class. And, 48V is a multiple of both 12V and 1.5V (most individual battery cells are 1.5V), making it much easier to hook up by using a combo of parallel and series connections.
Transponders have to be turned off while you're at the airport and while the plane's being serviced on the ground. I think the 777 has 2 transponders, so you also need the ability to switch from one to the other. Lastly, I believe all electronics on the plane must be on a breaker circuit in case of electrical shorts so that pilots can isolate and turn off faulty circuits.
Although, I guess you could just check if the landing gear is up. I'm sure there are corner cases I can't think of, but it seems like that would pretty much cover it.
While this makes sense, it may have to do with what the poster above you said:
> Lastly, I believe all electronics on the plane must be on a breaker circuit in case of electrical shorts so that pilots can isolate and turn off faulty circuits.
The difference between DCH (the full-on high power state) and PCH (the lowest power, waiting-for-paging state) is about two orders of magnitude. Last time I measured, it was about ~100mA vs ~1mA (at ~3.7V) used by the radio. So it's not a couple of minutes of battery life, but rather _many_ hours of standby life. People usually aren't very happy when a fully charged phone dies overnight.
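A rough illustration of what that two-orders-of-magnitude difference means for standby time (the 1500 mAh battery is an assumed typical capacity, and this ignores everything except the radio):

    #include <stdio.h>

    int main(void)
    {
        /* Radio-only standby estimate; 1500 mAh is an assumed battery capacity. */
        double battery_mah = 1500.0;
        printf("radio stuck in DCH:  ~%.0f hours\n", battery_mah / 100.0);  /* ~15 h   */
        printf("radio idling in PCH: ~%.0f hours\n", battery_mah / 1.0);    /* ~1500 h */
        return 0;
    }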
I've seen apps do some crazy things and it really has a significant effect on overall battery life. A popular Android weather clock widget woke the phone up every minute to update the minute number on the graphics and updated the weather information every ~15 minutes (GPS + radio!), which single-handedly crushed the standby battery life from multiple days to less than 8 hours.
Yes, I don't think _every_ decision should be based on minimizing the wake-ups... but on the other hand, all developers should at least try to have as much understanding as possible of the platforms they're working on, so that they know what trade-offs they're making with each feature they add.
I'm glad that Google has these videos available and that they're being picked up in places like HN.
Anecdata: I turn my phone's wifi off when not required because it has a significant effect on the battery life - particularly when I'm not in range of a wifi signal. I've had a full charge die on an 8-hour country drive because wifi was on. On the return trip, wifi was turned off and it behaved as expected.
Go for a long walk in the park? All that time your phone's wifi is straining to find a wifi signal. It's not as big a power sap as the screen, however it's constantly, silently in use and the screen isn't.
Wifi scanning is also pretty terrible for power consumption as you found out.
If you are on Android, my guess is that your phone was set to be connected to Wifi during sleep. You can control it by going to Wifi networks section -> menu -> Advanced -> Keep Wi-Fi on during sleep, then setting it to "Only when plugged in" or "Never".
If you're driving around with the phone set to leave the Wifi on during sleep, it'll be constantly going in and out of range of various Wifi networks. If the Wifi part is set for passive scan, then the processor needs to be kept up as the Wifi chip attempts to collect the BSSIDs of all the networks it sees and the processor tries to figure out whether these networks have been seen before or not. If it's set for active scan, then the Wifi also needs to power up the radios to transmit probe requests. Pretty bad news in either case, unfortunately.
I personally set up Llama to turn the wifi off when I'm not connected to a tower that I've previously defined as "wifi available". It's nifty because I never have to explicitly turn wifi off and then back on during my travels. To add a new wifi zone, I simply add the cell tower to that list and let the app do the rest.
Llama by itself uses less battery than leaving the wifi chip on, even in passive scanning mode, since the phone is always monitoring cell towers anyway.
What the hell? The M7 is a marketing label for some COTS silicon they licensed from some IP company somewhere (probably ARM/Broadcom or one of their associates).
It doesn't make unicorns shit rainbows or defy the laws of physics.
Your comment isn't only disrespectful, but also misses the point. It doesn't matter if the M7 is a custom design or bought off the shelf. As long as it needs less power than the application processor to sample the sensors at some frequency, it fulfills its purpose.
The problem is that once you get down to a ~$30k pure electric car, you're going to end up with something that looks like the Nissan Leaf, even 5 years from now. There is no magical Tesla dust that allows them to circumvent the laws of physics and economics.
The problem is in the batteries - Tesla Model S batteries come in 60 and 85 kWhr capacities. Using the most generous specific energy estimates and the price estimates (265 Whr/kg and 2.5 Whr/US$ - http://en.wikipedia.org/wiki/Lithium-ion_battery), you're looking at 500 to 700 lb of extra weight and $24k~$32k due to batteries alone. Even if the price halves in 5 years, you can't make money spending up to half the price of the car in batteries alone.
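For anyone who wants to reproduce that back-of-envelope with the cited Wikipedia figures (round numbers, so the results are ballpark and differ slightly from the range above):

    #include <stdio.h>

    /* Pack weight and cost from specific energy (265 Wh/kg) and price (2.5 Wh/US$). */
    static void pack_estimate(double kwh)
    {
        double wh = kwh * 1000.0;
        double kg = wh / 265.0;
        printf("%2.0f kWhr pack: ~%4.0f lb, ~$%.0fk\n",
               kwh, kg * 2.2046, wh / 2.5 / 1000.0);
    }

    int main(void)
    {
        pack_estimate(60.0);   /* ~499 lb, ~$24k */
        pack_estimate(85.0);   /* ~707 lb, ~$34k */
        return 0;
    }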
As a premium car, Tesla can charge the extra money required for the large battery pack and things like all aluminum chassis in Model S. At the lowered price point, the revenue just is not there to justify these things.
This basically means that to hit the $30k price point, you're going to end up with a much smaller battery pack (like Leaf's 24kWhr battery pack) and much smaller car so that they can hit the (lowered) performance target in both the driving characteristics (acceleration, top speed, etc which depend greatly on curb weight) and range (weight and battery capacity).
If you look at the engineering trade-offs required to get to a $30k pure electric car in a 5 year timeframe, it's hard to imagine something drastically different from the Nissan Leaf in range, size, and driving characteristics.
I think what you're saying is unnecessarily pessimistic. Lots of things evolve quickly in a period of 5 years in technology. If Tesla claims it will develop a $30k consumer car, I'd say they probably know what they're talking about, and they'll probably build it with quality. I mean, what's the point of going this far, if Musk just wants to piss away all of his credibility by making a crappy $30k electric car? If you want to guesstimate costs and hack them together, we can do this all day, but I'd say the only way to tell is to wait and see.
TL;DR - it's hard to make claims about the quality of the next Tesla car without having any idea of what they're going to do.
I worked in the electric vehicle space in the early 2000s - there have been many improvements since then for sure, but in battery technology, not much has changed. 10 years ago, a top-of-the-line 18650 Li-Ion cell (the same ones used in laptops and the Model S) weighed 46g, had a capacity of 2.6Ahr, and cost about $10 a pop. Now, it weighs the same but has a capacity of about 3.4Ahr and costs about $5. A 30% improvement at half the cost is nothing to laugh at, but that also took 10 whole years!
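For what it's worth, those cell-level numbers roughly line up with the 265 Whr/kg figure quoted earlier in the thread; a quick check (the 3.7 V nominal cell voltage is my assumption, not from the numbers above):

    #include <stdio.h>

    int main(void)
    {
        /* Quick check of the 18650 numbers above; 3.7 V nominal is assumed. */
        double old_ah = 2.6, new_ah = 3.4, mass_kg = 0.046, nominal_v = 3.7;

        printf("capacity gain:   ~%.0f%%\n", 100.0 * (new_ah - old_ah) / old_ah);  /* ~31%        */
        printf("specific energy: ~%.0f Whr/kg\n", new_ah * nominal_v / mass_kg);   /* ~273 Whr/kg */
        return 0;
    }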
The biggest problem with batteries is that they're chemistry-bound - you don't get the free twice-every-2-years type of improvement that we're used to in the computing world.
Even with the Nissan Leaf type of vehicle, the growth in battery capacity and more efficient / lighter chassis may result in extension of range to, say, 150 miles from 100 miles by 2018. Will that make it a no-compromise electric car? What would the no-compromise range be?
Looking at what Tesla has done, and what Elon has said (who actually very carefully said "sort of affordable" - http://greenenergyholding.blogspot.com/2013/08/teslas-next-e...) what's more likely is a new model starting at, say, $40k ($30k after tax credits) with fairly limited range, with really usable range starting at around $50k. Is that affordable? Probably not. But probably does fit the label of "sort of affordable".
A $30k Tesla doesn't have to differ drastically in range, size, and driving characteristics over the Leaf. As long as they make something that doesn't look ghastly, it's already got an advantage over the Nissan.
They don't need to profit directly on the low end cars, they're looking at increasing the number of Tesla owners, which will help to push pro-Tesla agenda and sell charging stations or other Tesla technologies.
The CPU is a rather small part of total platform power consumption these days. Switching x86 to ARM won't magically get you from 5 to 50 hours of active usage. If you compare the MacBook Air and the new iPad, you'll see that the difference is actually rather small:
MacBook Air 11.6 - 35 WHr battery and 5 hours of Wifi Usage at 7 W [1]
New iPad - 42.5 WHr battery and 10 hours of Wifi Usage at 4.25 W [2]
An e-ink screen may make that last far longer... but if you've spent any time trying to use the e-ink Kindle browser, you'll quickly realize that the lack of a fast refresh rate makes a usable UI a very difficult problem to solve.
As it is a netbook, I assume that the user puts much less demand on a GPU than he would on an Apple device. I was also thinking of using it on the road -- so no wifi.
I think there will be performance gains to be had, especially in applications where concurrency has been 'bolted on' (akin to the Big Kernel Lock).
However, I think what makes this interesting is not the raw performance it provides, but the functionality that it exposes. As far as I can tell, TSX will allow sets of operations to be executed, then "rolled back" in case of conflicts. This could greatly improve the performance of Java code within synchronized blocks, for instance, or provide a much faster hardware implementation of the software transactional memory model in Clojure.
I believe the biggest benefit of this will be making multi-threaded programming easier to get right, and getting decent performance to boot. And if these constructs are supported natively in languages and frameworks, everyone will benefit from having 4-, 8-, or 16-core machines.
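For the curious, here's roughly what the "execute, then roll back on conflict" primitive looks like from C. This is a minimal lock-elision sketch using GCC's RTM intrinsics (_xbegin/_xend/_xabort from immintrin.h, built with gcc -mrtm), not anyone's production code, and it assumes an RTM-capable CPU:

    #include <immintrin.h>
    #include <stdio.h>

    /* Shared counter protected by a plain spinlock, with the lock elided
       via a hardware transaction when possible. */
    static volatile int fallback_lock = 0;
    static long counter = 0;

    static void increment(void)
    {
        unsigned status = _xbegin();              /* start hardware transaction */
        if (status == _XBEGIN_STARTED) {
            if (fallback_lock)                    /* someone holds the real lock */
                _xabort(0xff);
            counter++;                            /* speculative update */
            _xend();                              /* commit atomically */
        } else {
            /* Transaction aborted (conflict, capacity, ...) - take the lock. */
            while (__sync_lock_test_and_set(&fallback_lock, 1))
                ;
            counter++;
            __sync_lock_release(&fallback_lock);
        }
    }

    int main(void)
    {
        for (int i = 0; i < 1000; i++)
            increment();
        printf("counter = %ld\n", counter);
        return 0;
    }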
"Apple typically asks suppliers to specify how much every part costs, how many workers are needed and the size of their salaries. Executives want to know every financial detail. Afterward, Apple calculates how much it will pay for a part. Most suppliers are allowed only the slimmest of profits."
"Many major technology companies have worked with factories where conditions are troubling. However, independent monitors and suppliers say some act differently. Executives at multiple suppliers, in interviews, said that Hewlett-Packard and others allowed them slightly more profits and other allowances if they were used to improve worker conditions."
The article clearly explains that Apple management does care, but that their drive for secrecy and control makes it much more difficult for them to effect change. Maybe he typed it on an HP computer instead.
Leakage power is actually a very significant part of total power usage [1] and one of the bigger reasons why Intel developed the tri-gate technology [2].
Active power is the one that's related to the frequency (P ~= CV^2f). Leakage power will "leak" even if the transistor is not switching.
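To make the distinction concrete, here's a toy calculation of the two terms (all numbers are made up purely to show how they scale, not real chip data):

    #include <stdio.h>

    int main(void)
    {
        /* Toy numbers, for illustration only. */
        double c_eff  = 1e-9;    /* effective switched capacitance, F */
        double vdd    = 1.0;     /* supply voltage, V */
        double freq   = 1e9;     /* clock frequency, Hz */
        double i_leak = 0.2;     /* total leakage current, A */

        double p_active = c_eff * vdd * vdd * freq;  /* paid only while switching */
        double p_leak   = vdd * i_leak;              /* paid even when idle */

        printf("active: %.2f W, leakage: %.2f W (%.0f%% of total)\n",
               p_active, p_leak, 100.0 * p_leak / (p_active + p_leak));
        return 0;
    }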
Not sure what you mean by significant, but typical leakage power numbers are something like 15-30% of total power.
Maybe you're referring to some papers that used to come out a few years ago which suggested that leakage power will dominate total power. As I said above, this is unlikely to happen. It doesn't make sense to operate at a combination of supply voltage (Vdd) and threshold voltage (Vt) where leakage dominates total power. I think these papers misunderstood the fact that threshold voltage and hence leakage itself is a knob that the device manufacturing folks can control.
> Active power is the one that's related to the frequency (P ~= CV^2f). Leakage power will "leak" even if the transistor is not switching.
If you're implying that leakage power doesn't affect frequency, you are wrong. Transistor speed depends on the gate overdrive which, for modern velocity-saturated devices is proportional to Vdd-Vt. Leakage power itself is proportional to exp(-Vt). There is a clear trade-off here between how fast you run your chip and how much it will leak.
The papers I've seen point to values larger than 15~30% - I've seen ~50% cited for geometries as large as 65nm, only to get worse as we go to even smaller feature sizes. [1]
Threshold voltage is not really an effective knob, unless you consider the feature size to be a knob and go against Moore's law, or consider a brand-new, once-in-10-years process innovation to be a knob that designers can pick out of a hat. I don't think anyone's clamoring for a return to 130nm parts on a smartphone. At each new process node, you're going to lose out on the amount of control you have over Vth.
This is basically what Intel did with the tri-gate transistors, which gives them a longer lease on life until they bump up against subthreshold leakage. TSMC is on their first generation of high-k metal gates, and still a process node or two away from jumping over to the tri-gate party.
If you're referring to this graph [1], that comes from an ITRS prediction. These predictions seem to be made by assuming that we'll keep scaling feature sizes while everything else stays the same, which of course is never the case. I wouldn't read too much into them. BTW, ITRS is famous for making ridiculous predictions, like that we'd be running at 15GHz by 2011.
Why is it not an effective knob? Most modern designs include sleep transistors in an attempt to not leak when a circuit is inactive. These would not work unless we could engineer high-vt transistors.
You're going to be I/O bound (network or disk), memory bound, or compute bound. It's hard to imagine the Redstone systems besting Xeon based servers in any of the three.
It depends entirely on where your bottlenecks are. If the bottleneck is entirely within your node, then this isn't going to be compelling. If you're doing something that's very light on the resources within your node (serving static content, etc) and your bottleneck is some other system somewhere else, then these sorts of machines could be compelling purely from a space/power POV.
If your nodes are not bound on some local resource, you can as well just run them in virtualization containers on Xeon. The setup will be even more flexible than with (less powerful) ARMs.
If your workload runs on one or two Xeon servers, it probably isn't worth considering something like this. If your workload runs on racks of Xeon servers, it might be.
Then the question is, which hardware delivers the right balance of CPU, memory and IO bandwidth for the lowest capital and operating costs.
Also for what it is worth, each card has 60Gbps of general IO bandwidth, and another 48Gbps of SATA disk bandwidth.