Tuesday, November 21, 2017

Autonomous Cars - the Sensor Problem - Part 3

So far in this series, we've looked at the radars being proposed for the task of monitoring traffic.  The radars will judge whether cars are in adjacent lanes and measure the relative velocities of those cars to determine if a potential collision is developing.  For example, the forward-looking radar might determine the car ahead has suddenly slowed dramatically and that if the car's ADAS doesn't apply the brakes, we're going to hit it.  Or it might check whether the adjacent lane is unoccupied in case we need to switch lanes.  A side-looking radar can also see if a car on a crossing path is approaching.

All of this seems rather straightforward, but consider the radar system designers in the Tesla incident we started this series with.  Did they not consider that the interstate signs would be big reflectors and that their radar would illuminate them?  Or did the antenna design get compromised while trying to fit it into the car's body?  Remember, Elon Musk tweeted, “Radar tunes out what looks like an overhead road sign to avoid false braking events."  "Tuning out" a return is not how a radar guy talks.  The software either ignored any return over some level, or they took measures to ensure they never got those high levels, like perhaps aiming the antenna down. 

Now let's look at the system that really seems to have the least chance of working properly: artificial vision.  A vision system is most likely going to be used to solve an obvious problem: how does the car know where the lane is?  That's what a lot of early work in autonomous cars focused on and there are times and conditions where that's not at all trivial.  Snow or sand is an obvious concern, but what about when there's road construction and lanes are redirected?  Add a layer of rain, snow or dirt on top of already bad or confusing markings and the accuracy will suffer.  When the paint is gone or its visibility goes away, what does the system do? 

A few weeks ago, Borepatch ran a very illuminating article (if you'll pardon the pun) about the state of AI visual recognition.
The problem is that although neural networks can be taught to be experts at identifying images, having to spoon-feed them millions of examples during training means they don’t generalize particularly well. They tend to be really good at identifying whatever you've shown them previously, and fail at anything in between. 
Switch a few pixels here or there, or add a little noise to what is actually an image of, say, a gray tabby cat, and Google's Tensorflow-powered open-source Inception model will think it’s a bowl of guacamole. This is not a hypothetical example: it's something the MIT students, working together as an independent team dubbed LabSix, claim they have achieved.
This was a recent news piece in the Register (UK).  In the mid-80s, I took a senior level Physical Optics class which included topics in Spatial Filtering as well as raytracing-level optics.  The professor said (as best I can quote 30+ years later), “you can choke a mainframe trying to get it to recognize a stool, but you always find if you show it a new image that's not quite like the old ones it gets the answer wrong.  It might see a stool at a different angle and say it's a dog.  Dogs never make that mistake”.  Borepatch phrased the same idea this way: “AI does poorly at something that every small child excels at: identifying images.  Even newborn babies can recognize that a face is a face and a book is not a face.”  Now consider how many generations of processing power have passed between my optics class and the test Borepatch described, and it just seems that the problem hasn't really been solved, yet.  (Obvious jokes about the dog humping the stool left out to save time).

Borrowing yet another quote on AI from Borepatch:
So why such slow progress, for such a long time?  The short answer is that this problem is really, really hard.  A more subtle answer is that we really don't understand what intelligence is (at least being able to define it with specificity), and so that makes it really hard to program.
That's my argument.  We don't know how our brains work in many details - pattern or object recognition is just the big example that's relevant here.  A human chess master looks at a board and recognizes patterns that they respond to.  IBM's Deep Blue just analyzed every possible move through brute-force number crunching.  The chess master doesn't play that way.  One reason AI wins at chess or Go is that it plays the games differently than people do, and the people the AI systems are playing against are used to playing against other people. 

We don't know what sort of system the Tesla had, whether it was photosensors or real image capture and real image analysis capability, but it seems to be the latter based on Musk saying the CMOS image sensor was seeing “the white side of the tractor trailer against a brightly lit sky”.  The sun got in its eye?  The contrast was too low for the software to work?  It matters.  In an article in the Register (UK), Google talked about problems their systems had in two million miles of trials: things like traffic lights washed out by the sun (we've all had that problem), traffic lights obscured by large vehicles (ditto), hipster cyclists, four way stops, and other situations that we all face while driving.

A synthetic vision system might be put to good use seeing if the car in front has hit the brakes.  A better approach might be for every car to have something like a MAC (EUI-48) address and announce to all nearby cars that vehicle number 00:80:c8:e8:4b:8e has applied its brakes and is decelerating at X ft/sec^2.  That would have every car running software that tracks every MAC address it can hear and determines how much of a threat each car is. 
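
A minimal sketch of that idea, with everything invented for illustration: the message format, the field names, and the "hard braking" threshold are all assumptions, not any real V2V protocol.  Each car broadcasts its identifier, speed, and deceleration; every listener keeps the latest report per identifier and flags the hard brakers.

```python
from dataclasses import dataclass

@dataclass
class BrakeReport:
    mac: str           # EUI-48 style vehicle identifier (hypothetical use)
    speed_fps: float   # current speed, ft/sec
    decel_fps2: float  # deceleration, ft/sec^2

class ThreatTracker:
    """A listener that keeps the latest report per vehicle it can hear."""
    def __init__(self, hard_braking_fps2=15.0):  # threshold is invented
        self.reports = {}
        self.hard_braking_fps2 = hard_braking_fps2

    def hear(self, report: BrakeReport):
        # Newer reports overwrite older ones from the same vehicle.
        self.reports[report.mac] = report

    def threats(self):
        # Flag any vehicle decelerating harder than the threshold.
        return [r.mac for r in self.reports.values()
                if r.decel_fps2 >= self.hard_braking_fps2]

tracker = ThreatTracker()
tracker.hear(BrakeReport("00:80:c8:e8:4b:8e", speed_fps=88.0, decel_fps2=22.0))
tracker.hear(BrakeReport("00:80:c8:12:34:56", speed_fps=90.0, decel_fps2=2.0))
print(tracker.threats())  # only the hard-braking car is flagged
```

The real problem, of course, is everything this sketch waves away: radio congestion, spoofed messages, and cars that don't participate.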

A very obvious need for artificial vision in a car is recognizing signs.  Not just street signs and stop signs, but informational signs like "construction ahead", "right lane ends" and other things critical to safe operation.  It turns out Borepatch even wrote about this topic.  Quoting that article from the Register, a confession that getting right the little things people do every day still overwhelms Google's self-driving cars: 
You can teach a computer what an under-construction sign looks like so that when it sees one, it knows to drive around someone digging a hole in the road. But what happens when there is no sign, and one of the workers is directing traffic with their hands? What happens when a cop waves the car on to continue, or to slow down and stop? You'll have to train the car for that scenario.

What happens when the computer sees a ball bouncing across a street – will it anticipate a child suddenly stepping out of nowhere and chasing after their toy into oncoming traffic? Only if you teach it.
It's impossible to teach ethics to a computer.  It's impossible to teach the computer "if a child runs out after that ball, slam on the brakes, and if you can't stop, hit something like a parked car".  A computer isn't going to understand the concept of "child" or "person".  Good luck with concepts like "do I hit the adult on the bike or injure my passengers by hitting the parked bus". 

But that's a question for another day.  Given the holiday, let's pencil in Friday. 
Question for the ADAS: now what? 


Monday, November 20, 2017

A World of Absurdities - Economic, That Is

One of the economics sites I read regularly is Mauldin Economics, by John Mauldin.  He was recommended to me first by a reader here, and my apologies for not remembering who you are.

John has been preparing for a big conference in Switzerland and this week's email, Bonfire of the Absurdities, is a summary of what he's presenting.  I really recommend you Read The Whole Thing.  As usual, I'm going to run a few snippets to whet your appetite.  Mauldin looks at a handful of economic indicators and more or less echoes my observation: not a week goes by without something happening to make me say, "the whole world has gone completely FN".  He's a little more polite.

Let's start with a graph a lot of you have already seen: the Federal Reserve bank's assets as a percentage of GDP.
Things went a little wonky there, somewhere around 2008, no?  Over to Mauldin:
Not to put too fine a point on it, but this is bonkers. I understand that we were caught up in an unprecedented crisis back then, and I actually think QE1 was a reasonable and rational response; but QEs 2 and 3 were simply the Fed trying to manipulate the market. The Keynesian Fed economists who were dismissive of Reagan’s trickle-down theory still don’t appear to see the irony in the fact that they applied trickle-down monetary policy in the hope that by giving a boost to asset prices they would create wealth that would trickle down to the bottom 50% of the US population or to Main Street. It didn’t.
In other words, the Fed is as good at seeing the irony of what they do as Antifa.  The really absurd point here is that the Federal Reserve's assets are under 30% of GDP.  The European Central Bank and the Bank of Japan have both grown their balance sheets more than the US has. The Bank of Japan’s balance sheet is almost five times larger in proportion to GDP, and it's still growing.

As long as he's in Switzerland, he needs to show a little of their absurdities, too. The Swiss National Bank (SNB) is now the world’s largest hedge fund.
The SNB owns about $80 billion in US stocks today (June, 2017) and a guesstimated $20 billion or so in European stocks (this guess comes from my friend Grant Williams, so I will go with it). 

They have bought roughly $17 billion worth of US stocks so far this year. And they have no formula; they are just trying to manage their currency.

Think about this for a moment: They have about $10,000 in US stocks on their books for every man, woman, and child in Switzerland, not to mention who knows how much in other assorted assets, all in the effort to keep a lid on what is still one of the most expensive currencies in the world.
And they're barely doing it.  If people put money into Swiss bonds, they don't earn yield; they pay for the privilege of losing money in Switzerland!  Switzerland is fighting a monstrous battle to keep its currency from going up.  Yet that's still not the most absurd thing here.
Not coincidentally, European yields are at rock bottom, or actually below that, in negative territory. And what is even more absurd, European high-yield bonds, which in theory should carry much higher rates than US Treasury bonds, actually yield below them. Here’s a chart from old friend Tony Sagami:
Interest rates are supposed to reflect risk. The greater the risk of default, the higher the rate, right? Yet here we see that European small-cap businesses are borrowing more cheaply than the world’s foremost nuclear-armed government can. That, my friends, is absurd.

Understand, the ECB is buying almost every major bond it can justify under its rules, which leaves “smaller” investors fewer choices, so they move to high-yield (junk), driving yields down. Ugh.
The common name for high-yield bonds is "junk bonds", because they have a high risk of default.  Here we find that European junk bonds, which (again) should have the highest yield, are earning less than US Treasuries.  (It doesn't say which term US Treasury, and there are many.  Sorry.)  Does this mean buyers think of the US as junk bonds?  Or do they not make the association and just go where they can get any yield? 

Let me leave you with one other plot to get a feel for the absurdity.  This is the total US stock market cap to GDP.  It is now at its second-highest level - at least in this 46-year plot - behind only the dot-com bubble of the late 90s, and much higher than the bubble that popped in '08.  Really, one good rally, an optimistic "we love 2017!" run-up, could put us at the same level as the dot-com peak or beyond.  I wonder how that's going to work out.     



There's plenty of absurdity left, and lots of stuff to make you go "hmmm".  Go read.


Sunday, November 19, 2017

Autonomous Cars - the Sensor Problem - Part 2

The first part of this look at the problems with these systems talked about a handful of radar systems that are likely to be on every car.  These are being proposed to work at millimeter-length wavelengths, frequencies very high in the microwave spectrum.  TI proposed 76 to 81 GHz, but think of them as someone offering a solution, rather than a consensus of system designers.

Let's take a look at radar systems, starting with the basics.

Radar is an acronym that has turned into a word: RAdio Detection And Ranging.  Radio waves are emitted by a transmitter, travel some distance, and are reflected back to the receiver, which is generally co-located with the transmitter (there are systems where they can be widely separated - bistatic radars).  The signals can be any radio frequency, but higher frequencies (microwaves and higher) are favored because as frequency goes up, size resolution - the ability to accurately sense the size of something - gets progressively finer. If you're making air defense radars, it's important to know if you're seeing one aircraft or a squadron flying in tight formation.  Higher frequencies help. 

What can we say about systems like the one TI is proposing?  A wavelength at 78 GHz is 3.84 mm, or 0.151" long.  The systems will be able to sense features 1/2 to 1/4 of that wavelength in size, and distinguish as distinct things that are only about 8/100" apart.  That simply isn't needed to look for nearby cars, pedestrians, or even small animals in the road.  If you're looking for kids on bikes, you don't need to resolve ants on the sidewalk.  On the other hand, these frequency bands are lightly used or unused, containing lots of available room for new systems. Which they'll need.
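
Those numbers are easy to check with a few lines of arithmetic - wavelength is just the speed of light divided by frequency:

```python
# Back-of-the-envelope check of the 78 GHz numbers above.
C = 299_792_458.0   # speed of light, m/s
freq_hz = 78e9      # 78 GHz, middle of the 76-81 GHz band

wavelength_mm = C / freq_hz * 1000   # meters -> millimeters
wavelength_in = wavelength_mm / 25.4  # millimeters -> inches

print(f"wavelength: {wavelength_mm:.2f} mm = {wavelength_in:.3f} in")
# Resolvable feature sizes, 1/2 to 1/4 wavelength:
print(f"features: {wavelength_in/2:.3f} to {wavelength_in/4:.3f} in")
```

That half-wavelength figure is where the "about 8/100 of an inch" comes from.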

The other thing to know about radar is that since it's a radio wave, it travels at the speed of light, like anything in the electromagnetic spectrum including visible light. This means that for ADAS uses, a radar system is going to need to transmit and receive very fast.  The speed of light is roughly 186,000 miles/second; expressed in inches that's 11.8 billion inches/second.  Stated another way, light travels 11.8 inches in one nanosecond.  For our purposes, we can say light travels one foot per nanosecond in air.  Ordinary radars, whether tactical radars or weather radars, are intended to operate over miles; these vehicle systems won't operate over more than 10 or 20 feet, with the exception of something looking forward for the next car, which needs to work over hundreds of yards.  Radar system designers often talk about a "radar mile", the time it takes for a radar signal to go out one mile and bounce back to the receiver.  (A statute radar mile is 10.8 microseconds.)  We don't care about miles, we care about "radar feet". 

A car in the next lane won't be more than 20 feet away, giving some room for uncertainty in the lane position, so it doesn't seem like a system needing to look a lane or two over would care about returns from more than 40 feet away.  In "radar time" that's (40 feet out and 40 feet back) 80 feet at 1 ft/nsec, so the time from transmit to receive is 80 nsec.  A system could put out a pulse, likely corresponding to a few inches, like 0.25 nsec, listen for its return over the desired distance, then repeat.  It could repeat this transmission continuously, every 80 nsec (plus whatever little bits of time it takes to switch the system from receive back to transmit), but that would require blazingly fast signal processing to handle continuous processing of 80 nsec receive periods, and I think it doesn't have to.  Things in traffic happen millions of times slower than that, in fractions of a second, so it's likely it could pulse several times a second, say every 1/100 second, listen for the 80 nsec, and then process the return. 

For looking a quarter mile down the road, 440 yards each way, that becomes listening for 2.64 microseconds. 

I'm not a "radar algorithms guy", so I don't have the remotest feel for how much processing would be involved, but allowing 1/100 of a second to complete the processing from one 80 nsec interval, and allowing the same or even a little more time to complete processing for a 2.64 microsecond interval, doesn't seem bad.  
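
The "radar feet" arithmetic above works out like this, using the 1 ft/nsec approximation from earlier:

```python
FT_PER_NSEC = 1.0  # approximation: light travels ~1 ft per nanosecond in air

def listen_time_nsec(range_ft):
    """Round-trip time to a target range_ft away: out and back."""
    return 2 * range_ft / FT_PER_NSEC

print(listen_time_nsec(40))       # 80.0 nsec for the 40-ft next-lane case
print(listen_time_nsec(440 * 3))  # 2640.0 nsec = 2.64 usec for 440 yards each way
```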

Asking what sorts of power they'd be transmitting starts to involve more assumptions than I feel comfortable making about what antennas they'd use, the antenna patterns, their gain, and far more detail, but some back of the envelope path loss calculations make me think that powers of "10-ish" milliwatts could work.  That shouldn't be a problem for anyone. 

Chances are you have, or know someone who has, a car with backup sonar in it: sensors that tell the driver as they get within some "too close" distance to something behind them.  The sensors are typically small round spots on or near the rear bumper that measure the distance to things behind the vehicle by timing the reflections of an ultrasonic signal (I've seen reference to 48 kHz) - they're the round black spots on the bumper in this stock photo.

Since the speed of sound is so much lower than the speed of light, the whole description above doesn't apply.  While I don't have experience with ultrasonics, it seems the main thing it gives up is the resolution of the radar, which is already finer than we need.  Ultrasonics might have their place in the way autonomous cars can be implemented. 
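
Just to put a number on how different the timing is: the same out-and-back measurement at the speed of sound (roughly 1125 ft/sec in air at room temperature - an assumed value) gives echo times in milliseconds, not nanoseconds:

```python
SPEED_OF_SOUND_FPS = 1125.0  # approx. speed of sound in air, ft/sec (assumed)

def echo_time_ms(range_ft):
    """Round-trip ultrasonic echo time to an obstacle range_ft away, in ms."""
    return 2 * range_ft / SPEED_OF_SOUND_FPS * 1000

# An obstacle 5 feet behind the bumper:
print(f"{echo_time_ms(5):.1f} ms")  # ~8.9 ms
```

That's nearly a million times slower than the radar case, which is why the timing electronics for backup sensors can be so cheap.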


Saturday, November 18, 2017

Bubba Doesn't Just Gunsmith

Sometimes Bubba works on electronics.
At least a year ago, this APC Back UPS 1300 died.  The batteries, already a replacement set, wouldn't take a charge anymore. It has sat in the shop, upside down, waiting for me to do something about it. 

During Irma, another UPS started emitting the unmistakable smell of burning electronics.  We shut it down to troubleshoot after the storm.  I pulled the batteries and did a life test on them with my Computerized Battery Analyzer (CBA-IV).  The batteries were fine.  Put the system back together and it ran for a while, then started smelling like smoke again.  Not good.

So the UPS itself was scavenged for useful parts and the batteries put aside.  Yesterday, I put "2+1" together and put the two good batteries into the old but good UPS.  It seems to work fine and doesn't stink.  There's the small matter of the batteries being too big for the case, but that's kind of a feature.  It gives more back up time than the original.  If I wasn't quite as willing to live with the battery cover duct-taped on, I'd figure out how to make a new one.  I know!  I'll build a 3D printer from scratch to print a cover! 



Friday, November 17, 2017

Autonomous Cars - the Sensor Problem

In May of 2016, a Tesla car under "autopilot" control was involved in an accident that killed the person in the driver's seat.  Inevitably, whenever this accident is mentioned, someone feels the need to show up and say that no one is supposed to mistake autopilot for autonomous control.  If something goes wrong, the driver is responsible, not Tesla.  Nevertheless I find the accident instructive if we want to think about the kinds of problems autonomous cars need to get right all the time. 
In that collision, which occurred at about 4:30 in the afternoon on a clear day, a truck turned left in front of the Tesla which didn't brake or attempt to slow down.  This is the kind of thing that happens every day to most drivers, right?  Should be a priority to program cars to not kill people in this sort of scenario.  The Tesla's optical sensors didn't detect the white truck against the bright sky, and its radar didn't react to it either.
The Tesla went under the truck, decapitating the driver, then drove off the road onto a field near the intersection. 

It's not hard for a human with vision good enough to get a driver's license to see a truck against the sky background.  As I've said many times before, once a child knows the concept of "truck" and "sky" - age 3? - they're not going to mistake a truck for the sky or vice versa. 
Tesla’s blog post followed by Elon Musk’s tweet give us a few clues as to what Tesla believes the radar saw. Tesla understands that the vision system was blinded (the CMOS image sensor was seeing “the white side of the tractor trailer against a brightly lit sky”). Although the radar shouldn’t have had any problems detecting the trailer, Musk tweeted, “Radar tunes out what looks like an overhead road sign to avoid false braking events.”
The way I interpret that statement is that in an effort to minimize the false/confusing returns the radar sees In Real Life (what radar guys call clutter), which is to say in an effort to simplify their signal processing, the radar antenna was positioned so that its "vision" didn't include the full side of the truck.  It shouldn't be impossible to distinguish a huge truck almost on top of the car from a large street sign farther away, by the reflected signal and its timing.  Perhaps they could have worked at refining their signal processing a bit more and left the radar more able to process the return from the truck.  The optical sensors have the rather common problem of being unable to recognize objects.  On the other hand, we've all had the experience of a reflection temporarily blinding us.  Maybe that's the sensor equivalent.  
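
A toy sketch of that interpretation - using the return's timing (range) together with its strength, instead of discarding every strong return outright.  The thresholds and categories here are entirely invented for illustration; a real radar processor would do far more:

```python
def classify_return(range_ft, amplitude):
    """Crude gate: weak returns are clutter; strong-and-far looks like an
    overhead sign; strong-and-near is something to brake for.
    All thresholds are made up for this sketch."""
    if amplitude < 0.3:
        return "clutter"
    if range_ft > 150:
        return "probable overhead sign (ignore)"
    return "obstacle (brake!)"

print(classify_return(range_ft=400, amplitude=0.9))  # sign-like: strong but far
print(classify_return(range_ft=30, amplitude=0.9))   # truck-like: strong and near
```

Even this crude gate wouldn't have ignored a trailer broadside 30 feet ahead, which is the point: the return's timing carries information that a simple amplitude cutoff throws away.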

A recently created electronics industry website, Innovation Destination Auto, a spinoff of Electronic Design magazine, runs a survey article on automotive radars for the Advanced Driver Assistance System (ADAS) market.  There is a lot of work being done on radars for cars.  Radar systems for cars are nothing new; that has been going on for decades.  What's different this time is the emphasis on sensing the total environment around the car. 

It's all about enabling the car to know everything going on around it, which it absolutely has to do.

Electronic devices such as millimeter-wave automotive radar systems are helping to evolve the automobile into a fully autonomous, self-driving vehicle. The Society of Automotive Engineers (SAE) International has actually defined six levels of driving automation, from level 0, with no automation, to level 5, with full automation and self-driving functionality. Different types of sensors within a car, including millimeter-wave radar transceivers, transmit beams of energy off different objects within their field of view, such as pedestrians or other cars, and detect the reflected returns from the illuminated objects. Sensor outputs are sent to one or more microprocessors to provide information about the driving environment for assistance with driving functions such as steering and braking to prevent collisions and accidents.

Multiple sensors are needed for 360-deg. detection around an ADAS automobile. Often, this involves sensors based on different forms of electromagnetic (EM) energy. Automotive radar sensors typically incorporate multiple transmitters and receivers to measure the range, angle, and velocity of objects in their field of view. Different types of radar systems, even different operating frequencies, have been used in ADAS systems, categorized as ultra-short-range-radar (USRR), short-range-radar (SRR), medium-range-radar (MRR), and long-range-radar (LRR) sensors or systems.
The article is "Sponsored By" Texas Instruments, among the largest semiconductor companies in the world, and links to some radar Systems On A Chip they've developed for the automotive market. 

The different types of radar serve different purposes, such as USRR and SRR sensors for blind-spot-detection (BSD) and lane-change-assist (LCA) functions and longer range radars for autonomous emergency braking (AEB) and adaptive-cruise-control (ACC) systems. USRR and SRR sensors once typically operated within the 24-GHz frequency band, with MRR and LRR sensors in the 77-GHz millimeter-wave frequency range. Now, however, the frequency band from 76 to 81 GHz is typically used, due to the high resolution at those higher frequencies—even for shorter distance detection.
It seems to me that these are going to be fairly simple systems with low power transmitters and receivers.  Even the "LRR" (long-range-radar), shouldn't be too demanding on design.  There's a lot of variables I'm sweeping under the rug here, but a car needs to see a few hundred yards at most, and the demands on those radar transmitters and receivers don't strike me as being severe.  

This is just the beginning.  Truly autonomous cars should probably communicate with each other to work out collision avoidance similar to how aircraft do.  It has been proposed.  It should be easier for cars.  Cars can stop.  Aircraft can't.
After the August eclipse, there were reports of horrific traffic jams in several places.  I know I posted about it, as did Karl Denninger and some other people.  What this means is that the road infrastructure is incapable of handling the traffic when it goes above some normal range.  I recall hearing that in a metropolitan area, like around Atlanta where there always seems to be trouble, adding lanes to the interstate costs millions per mile.  No sooner are the lanes built than more lanes are needed.  One of the attractions of autonomous cars is that they should be able to drive higher speeds in denser patterns, getting the effect of more carrying capacity in the highway without adding lanes.  Since they're all communicating with each other, chances of an accident should drop precipitously.  I think that's one reason the governments seem to be pushing for autonomous vehicles. 

About That Whole GQ Story

There's a lot of buzz over GQ picking Colin Kaepernick as their "Citizen of the Year".
I haven't said anything but I just want to pass on what I think is going on.  The easy one to shoot for is that their political beliefs align with his.  I think that's secondary.  The big reason is that GQ is failing, like most magazines, and it has been years since anyone has said "there's a lot of buzz over GQ" or since GQ has made news at all.  If they ever have. 

There's a quote attributed to PT Barnum that "there's no such thing as bad publicity", and I think they're just dying for people to notice they're still around. 


Thursday, November 16, 2017

Another Star Talks About Harassment

Showbiz legend Kermit tells of what he had to endure to get his break in Hollywood.


This in the wake of Bugs Bunny's revelations on Virtual Mirage


Wednesday, November 15, 2017

Is There a Future Role for Humanoid Robots?

Remember Marilyn Monrobot and founder Heather Knight from back in 2011?  Dr. Knight believed that service robots which interacted with people would need to be humanoid, and if they needed to be humanoid, they would need to be more human and less creepy.  Her phrase was "Devilishly Charming Robots and Charismatic Machines," and she worked on social aspects: programming robots to interact more like people.  She even did a standup comedy routine with a robot in which the robot adapted its jokes to the audience's reactions.  Robotics researchers talk of something called "the uncanny valley":
Part of her mission is to address the so-called "uncanny valley" -- a moniker used by roboticists to describe the phenomenon wherein humanoid robots give the creeps to real humans (which most of you probably are).
Robots, of course, have been moving into industry since just about forever and I think no one ever uses the terms charming or charismatic for industrial robots.  Utilitarian at best.  Furthermore, more robots are coming.  According to the Boston Consulting Group, by 2025, robots will perform 25% of all labor tasks.  Robots are becoming better, more capable and cheaper.  The four industries leading the charge are computer and electronic products; electrical equipment and appliances; transportation equipment; and machinery. They will account for 75% of all robotic installations by 2025.

Machine Design presents this breakdown of the market:
In a recent report from Berg Insight, the installed base of service robots is expected to reach 264.3 million units by 2026. In 2016, 29.6 million service robots were installed worldwide. The robots in the service industry broke down into the following groups:
  • Floor cleaning robots accounted for 80% of total service robots, with 23.8 million units
  • Unmanned aerial vehicles accounted for 4 million units
  • Automated lawnmower units tallied 1.6 million units
  • Automated guided vehicles installed 0.1 million units
  • Milking robotic units tallied to 0.05 million units
The remaining segments included humanoid robots (including assistant/companion robots), telepresence robots, powered human exoskeletons, surgical robots, and autonomous mobile robots. Combined, they were estimated to have had less than 50,000 units installed.

Humanoid robots, while being one of the smallest groups of service robots in the current market, have the greatest potential to become the industrial tool of the future. Companies like Softbank Robotics have created human-looking robots to be used as medical assistants and teaching aids. Currently, humanoid robots are excelling in the medical industry, especially as companion robots.  [Wait ... "milking robotic units"?... Robotic milking machines? ... Sometimes I wish I could draw cartoons - SiG]
One might ask why?  Why should humanoid robots take over in so much of the world?  In industrial design, it's often the case that "design for test" or "design for manufacturability" means spaces are left around connectors so people can fit their hands in there.  Entire "human engineering" (ergonomics) specifications exist with typical hand sizes, typical arm lengths, and so on, so that it can be worked on by humans.  We're not talking about how close the keys on a keypad are, that's for the users.  We're talking about how close the hardware is to other features inside the box, where users don't go.
Softbank Corp. President Masayoshi Son, right, and Pepper, a newly developed robot, wave together during a press event in Urayasu, near Tokyo, Thursday, June 5, 2014. (AP / Kyodo News)

Softbank Robotics' sorta-humanoid robot Pepper looks like something Dr. Knight would do (or research).  Pepper is far enough from looking human to avoid being creepy. 

If humans aren't going to work on the product, why design it around a human assembler?  Why not design the thing for the optimum size and internal functions and design a special robot to assemble it?  If the robot is going to do the work, it doesn't have to have human sized hands or look human. Witness the daVinci surgical robots, which certainly aren't humanoid. 

On the other hand, if the human and the robot are going to be working side by side, that's the only reason to have the robot proportioned like a human.  Machine Design references Airbus, saying they want to hand off some tasks currently done by humans to robots.
By using humanoid robots on aircraft assembly lines, Airbus looks to relieve human operators of some of the more laborious and dangerous tasks. The human employers could then concentrate on higher value tasks. The primary difficulty is the confined spaces these robots have to work in and being able to move without colliding with the surrounding objects.
A potential exception to that is the often-talked about use of humanoid robots as helpers for people with reduced mobility or other issues.  I don't think I care if the robot that picks me up out of bed 30 years from now looks particularly human, as long as it doesn't drop me.  On the other hand, there seems to be evidence that robots that look more human and capable of mimicking emotions can be useful with some patients. 
University of Southern California Professor Maja Matarić has been pairing robots with patients since 2014. Her robots helped children with autism copy the motions of socially assistive robots and, in 2015, the robots assisted stroke recovery victims with upper extremity exercises. The patients were more responsive to the exercises when promoted and motivated by the robot.
While the number of humanoid robots needed will very likely be small, there's little doubt that the future is very bright for robot makers and the people who will program them.

A prototype assembler robot for Airbus.  The ability to climb a ladder like that is important to them. 

Tuesday, November 14, 2017

Three Days from Done?

Or done now?  My Breedlove is now presentable in polite company.  After determining that the cured polymer coating on it is insoluble and not going to be damaged by anything I can put on it, I bought a can of high gloss Minwax Polyurethane spray and did four coats today.


The wood itself is gorgeous.  It's quilted maple, stained with a mix of water soluble Transtint Dyes, blended up on a practice piece to see how I liked the color.  Quilted maple reflects light differently with every move, and the pattern you see is only there with the light from that angle.  Despite looking wavy and almost bubbly, it's flat and smooth.  The wood was gifted to me by reader Raven, who offered it in reply to my late June post about putting a clear plastic side on this guitar.  Couldn't have done it without your help!

All in all, the finish looks pretty good, but not as "deep" or glossy as the factory finish.  The spray can instructions say to spray a light coat every two hours, and by the time I fussed over a detail that I didn't like, I got started close to 10AM.  Two hours after the third coat, I lightly sanded with 500 grit, cleaned with mineral spirits and shot a fourth coat.

My experience with the finish compatibility test over the weekend says this won't reach maximum hardness until late tomorrow at the earliest, in line with the can's warnings not to use the item for 24 hours.  Three days comes from the other instruction on the label saying:
Recoat within 2 hours. If unable to do so, wait a minimum of 72 hours, then lightly sand and recoat.
That says I could add more finish on top of what I have on Saturday.  My tentative plan is to try to buff the guitar with a mild polish.  Not rubbing compound but something beyond pure wax.  Tool Junkie Heaven for guitar techs offers electric buffers or foam polishing pads.  The pros use something like their buffing systems:
If the polish doesn't help, maybe I need to repeat adding three or four more coats on Saturday.

I can't do anything to it for now, so in the meantime, it's on to other projects.



Monday, November 13, 2017

An Introduction to Feedback Systems

I plan to talk more about technical topics in the coming weeks, and one of those topics that's worth covering is an introduction to Control Systems, commonly called Feedback and Control Systems. 

I know that some of you are already really familiar with these things, in great and gory detail.  Take the night off (unless you care to look for mistakes to correct).  For my career in radio design, I principally designed feedback and control systems like Frequency Synthesizers (Phase Locked Loops - PLLs), Automatic Gain Control (AGC) systems in receivers, Automatic Level Controls (ALC) for transmitters, and Cartesian Transmitter Linearizers.  For those of you who haven't looked into the subject, electrical engineering students take a class in control systems that tends to be very analytical and very mathematical.  I'm going to skip the math and try to explain things in words.  I'll also be the first to say that when you design your 10th or 15th PLL or AGC, there's not much theory involved.  It's pretty much solving a few equations (which you've probably stuck in a spreadsheet or other software) and you're done.

For starters, let's define a feedback system.  It's a system that corrects itself by comparing its state ("what we got") to its desired state ("what we want it to be").  This diagram is a simple type of feedback system.
For some folks, this is probably more confusing than helpful, so let's do a simple example of something that everyone knows: a thermostat in an air conditioner.  The thing we're controlling is the temperature at the thermostat, which we use as a proxy for the temperature everywhere under that air conditioning.  The feedback sampler is a thermocouple or something that measures the temperature - the equivalent of a thermometer.  The heart of the loop is that circle with an X in it, which compares "what we want it to be" (the thermostat setting) to "what we actually got".  It compares the two electrical signals and generates an error signal ("what we got" minus "what we wanted") that goes to what's labeled the feedback controller here, which makes the feedback correction.  In a thermostat, this is an on/off switch.   

It's important to notice that if the output is bigger than we wanted, the control system turns it down, and if it's smaller, the system turns it up.  Since the correction is opposite the measurement, this is called negative feedback.  If you think of the audio screams and howls that happen when a microphone is in front of a loudspeaker, there is no correction and the output gets louder until it can't go up any more.  This is a type of positive feedback, not a control system.  Perhaps you've heard of the term vicious cycle, where something happens and its result is to contribute to causing it again: that's a positive feedback situation.

What I'm going to describe next is how my central air conditioner works.  I don't know how universal this is, but I've watched my thermostat and know that if I set some temperature, say to cool the house to 75 degrees, it won't turn the air conditioner on to reduce temperature until it measures 2 degrees above the desired temperature, 77.  The error has to be that big before it will turn on.  The air conditioner then comes on at full power until the temperature at the thermostat reaches the desired temperature; sometimes it overshoots and goes a degree lower than the thermostat is set.  There has to be some sort of difference (called hysteresis) between the temperature at which it turns on and the temperature at which it turns off.  There's no way the system could know to both turn on and turn off when it's 75.

This is what's called a Bang-Bang controller.  It either turns on the cooling 100% or it turns it off (0%).  That's a pretty crude system, and lots of control systems you're familiar with don't work that way.  Consider cruise control in a car: if you had a Bang-Bang controller your accelerator would go full throttle or to idle.  A Bang-Bang controller works for a simple thermostat, but in a cruise control the demands for accuracy are higher, and we want something that doesn't continually speed you up and slow you down by a couple of MPH. 
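The thermostat behavior described above is simple enough to sketch in a few lines of code.  This is just an illustrative sketch, not anyone's actual thermostat firmware; the 2-degree hysteresis band is the one I observed on my own thermostat.

```python
def bang_bang(setpoint, measured, cooling_on, hysteresis=2.0):
    """Thermostat-style bang-bang controller with hysteresis.

    Cooling turns ON only when the temperature climbs `hysteresis`
    degrees above the setpoint, and OFF once it falls back to the
    setpoint.  Inside the band, it keeps doing whatever it was doing.
    """
    if measured >= setpoint + hysteresis:
        return True          # too warm: cooling at 100%
    if measured <= setpoint:
        return False         # cool enough: cooling at 0%
    return cooling_on        # inside the band: no change

# Set to 75: the A/C kicks on at 77 and runs until it reads 75 again.
state = False
for temp in [75, 76, 77, 76.5, 75.5, 75, 74, 76]:
    state = bang_bang(75, temp, state)
    print(temp, "cooling" if state else "off")
```

Note that the hysteresis is what keeps the unit from rapidly cycling on and off when the temperature hovers right at the setpoint.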

There are much more elegant control systems available, and the smoothest response is generally the Proportional-Integral-Derivative or PID controller.   A PID controller calculates three quantities and then combines them to create the error correction needed.  Hopefully, this graph will help explain it while I add some words.
Proportional to error means that the bigger the error, the harder it tries to correct.  Integration of errors over time is an averaging process - the result incorporates both how big the error is and how long it has lasted; its output gets bigger if some error has persisted, not just if the error is larger.  Finally, the derivative term is proportional to the rate of change of the error - it senses whether the error has been growing quickly or slowly. 
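In code, the three terms look something like this.  This is a minimal textbook sketch, not any particular product's controller, and the gains (kp, ki, kd) are made-up numbers you'd have to tune for a real system.

```python
class PID:
    """Textbook PID: output = kp*error + ki*integral(error) + kd*d(error)/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0      # running sum of error over time
        self.prev_error = None   # remembered for the rate-of-change term

    def update(self, setpoint, measured, dt):
        error = setpoint - measured         # "what we want" minus "what we got"
        self.integral += error * dt         # grows with both size AND duration of error
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Proportional-only example: with kp=2, a 2-degree-too-warm reading
# produces an output of -4 (negative meaning "drive the temperature down").
ctrl = PID(kp=2.0, ki=0.0, kd=0.0)
print(ctrl.update(setpoint=75.0, measured=77.0, dt=1.0))  # -4.0
```

The output is a continuous correction, unlike the bang-bang controller's all-or-nothing switch, which is exactly why it suits something like cruise control.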

The drawback of PID systems is that they're complex and can be hard to get running well; in many cases, an error signal that's proportional to the error is all that's needed.  I don't recall ever seeing an AGC, PLL, ALC or any other electronic control system that used a PID controller.  On the other hand, this is something that the continuing advancement of electronics has vastly improved, and for some control tasks, like the temperature of a furnace, kiln, or other process, you can buy a preprogrammed, ready-to-use PID controller for well under $100, and sometimes under $20. 

Proportional or PID systems are starting to make their way into air conditioners.  We have a Mini Split system in the workshop and it behaves that way.  Instead of the unit turning off when the temperature is cooler than the thermostat is set for, it cools at the lowest energy consumption it can. 

A Fun Fact is that PID controllers were first developed for automatic steering systems of ships at sea in the 1920s.  They make an obvious choice for automatic steering systems in an autonomous car or truck.  The details of how the car decides "what we want" and measures "what we got" are monstrously huge problems that have to be solved. 



Sunday, November 12, 2017

This Guy Needs a Little Econ 101

Cartoonist Bob Gorrell:
Once more from the top.  Corporations do not pay tax.  Corporations collect tax.  Since the bulk of their sales is to middle class people (because they're the majority of the country), the middle class taxpayers pay the corporations' taxes.  Corporate tax cuts are middle class tax cuts. 

Corporations don't have a penny that wasn't made through sales.  Consider a local "mom and pop" pizza parlor.  Their sales price consists of two major things:
  • Cost of materials and the labor to make it sellable (for example, pizza ingredients plus the labor to make them into that delectable Food of the Gods) 
  • Overhead (for example, electricity for the oven, rent for the building, napkins, other supplies, and taxes/fees mandated by the various government levels that regulate.  Things like insurance payments for workman's compensation, unemployment insurance and an increasingly longer list of fees/taxes.) 
Those are absolutely mandatory expenses that must be covered.  Maybe not minute by minute, but if those expenses aren't covered, the business can't survive. 

Whatever rate the Feds tax at, that's collected before profit and is part of what must be covered by the price.  That tax is paid by the pizza buyer.  If the percentage of taxes drops, the price of the product can drop, or it can leave more money in the business for expansion.  Ultimately, both of those outcomes are good for the pizza buyer.
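As a toy illustration of the pass-through (every number here is invented for the example, not from any real pizza parlor), here's the arithmetic of pricing a pie so the owner keeps the same margin after tax:

```python
def breakeven_price(materials, labor, overhead, tax_rate, margin=0.05):
    """Price a pizza so that, after tax on the profit, the owner still
    keeps `margin` of each sale.  All inputs are dollars per pie.

    price - cost = pretax profit
    pretax profit * (1 - tax_rate) = margin * price
    => price = cost / (1 - margin / (1 - tax_rate))
    """
    cost = materials + labor + overhead
    return cost / (1 - margin / (1 - tax_rate))

# $8 of cost per pie, 5% kept margin, at two different corporate rates:
print(round(breakeven_price(4, 3, 1, tax_rate=0.35), 2))  # 8.67
print(round(breakeven_price(4, 3, 1, tax_rate=0.21), 2))  # 8.54
```

Same costs, same owner take-home; the only thing that changed is the tax rate, and the difference shows up in the customer's price.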


Saturday, November 11, 2017

Veteran's Day Tribute 2017

There really isn't anything I can add to the quality writing exhibited around the blogs today.  I came of military age while the Viet Nam war was winding down and the draft switched over to a lottery system.  My number was high enough that they didn't get to me that year, and because no one in my family had a history of enlisting, it simply never occurred to me.  I had one friend who went to the Air Force Academy, but that just seemed different.  So while I'm not a veteran, I have deep respect for those of you who are.  You allow me to sit here and write this. 


There's a million good images to use, but this one speaks to me.

Friday, November 10, 2017

Extreme Modeling

A few weeks ago, when I wrote about the new Blade Runner 2049 movie, I emphasized the virtual world they created.
It may be the most visually stunning movie I've ever seen.  The design of the futuristic dystopian world they create is really incredible.  It's literally world creating, not set creating.  The sets - however much is CGI and how much is carpentry and paint I can't say - are simply amazing.
Today I stumbled across a bit more information on this.  The 3D CAD program I use the most, Rhino3D, has a user forum that I visit perhaps once a week.  Today, someone pointed out that in this video, there's evidence that Rhino was used in the creation of the world of Blade Runner.  Sure enough, there's a brief glimpse of a computer screen with program menus at 53 seconds in this video.  I doubt it lasts a half second.  It could well be Rhino, but I'm not familiar enough with other tools to know that it's the only program it could be.


Whether it's Rhino isn't the point of this post.  The point is a look behind the scenes at the utterly amazing models they built.  Then notice the sheer number of artists and model builders that they employed.  Notice their incredible attention to detail.  Not to mention the physical size of these models.  There's a spot in the video (around 3:10) where production manager Pamela Harvey-White says of one set of models, "They're 'bigature', not miniature.  They're just massive buildings."

Honestly, I didn't know this sort of "movie magic" was still done these days.  I can never tell anymore whether I'm looking at actors in front of a green screen with everything drawn in around them, or how much of the set is really there.  I mean, if it's like Guardians of the Galaxy and someone is talking to Rocket the talking Raccoon, that one's pretty obvious.  How much of the rest of the scene is there? 

 One of the commenters on the Rhino Forum says
Actors have a really hard time relating emotionally to a story in greenrooms, acting with placeholders in greensuits, so the trend is to let them have as much real scenery around them as possible. And the same goes for directors and photographers, it is much easier to be in the scene with a miniature as you can light it and tweak it hands on, instead of having to ask a “teenager” if “this” or “that” is possible.
Everyone talks about AI and Robots taking over everything, right?  Are they going to imagine scenes like these or the virtual world of Blade Runner?  I'll believe it when I see it.

So let me leave you with this fun fact: the actor in full body makeup with his hand on the greensuit guy sitting in for Rocket is Dave Bautista.  He's in the opening scene of Blade Runner 2049.


The First Actual Chainsaw Bayonet

I was sure I still had this picture, but I had to go do an image search to find it.  The actual chainsaw bayonet.


OK, maybe it's not the actual first chainsaw bayonet; it's just the first picture I ever saw.  I thought I had it on my computer, but I searched pretty extensively and couldn't find it.  Must have said, "I'll never need that picture again" during one of my cleanup frenzies.

Hat tip to MRCTV - the Media Research Center - which includes cringe-inducing video of the saw in action and this chainsaw carrying assault junk Huffy bike.



Wednesday, November 8, 2017

While Searching for my Next Engine to Build

I came across an interesting post on Pinterest: a Stirling engine-driven battery charger. 
If you haven't already noticed, that's a rendering of a 3D model, not a photo of an engine, and when I went to the original source, Interesting Engineering, it became apparent it's vaporware.  The original rendering and short little article were by three Mechanical Engineering students who graduated before building this.  So there are no plans and no numbers to understand the problem.

Stirling engines are nothing new; Scottish clergyman Robert Stirling invented the concept in 1816, and they're very popular among home machinists.  Stirling engines are heat engines that operate by cyclically compressing and expanding air - a good explanation here and a good animation of how they work here.  A well built Stirling engine can turn a good sized fan blade, or spin some sort of flywheel.  There are models which will run on the warmth from your hand.

I can find that a well built Stirling should achieve efficiency around 66%, but what I can't find is how to design it or how big it needs to be (physically) to deliver a desired amount of power.  In this application, you know that USB chargers are typically either 5W or 10W.  In my mind, that tells me if I want the engine to charge things at the 10W level, it needs to be sized to put out more than that.  How much more?  Since there will probably be some inefficiencies in the circuitry, let's say the generating system is 75% efficient; that means the input to the generator needs to be 10/.75 = 13.3W and the input to the engine would need to be 13.3/.66 = 20.2W.  To me, that ought to be how the size of the flywheel and the piston are calculated.
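That back-of-envelope sizing is simple enough to write down.  The 66% and 75% efficiency figures are the assumptions from above, not measured numbers for any real engine:

```python
def required_heat_input(load_w, gen_efficiency=0.75, engine_efficiency=0.66):
    """Work backward from the USB load to the power the heat source
    must deliver to the Stirling engine, given assumed efficiencies."""
    shaft_w = load_w / gen_efficiency      # generator/circuitry losses
    return shaft_w / engine_efficiency     # engine losses

print(round(required_heat_input(10), 1))   # 20.2 W in for a 10 W charger
print(round(required_heat_input(5), 1))    # 10.1 W in for a 5 W charger
```

Working the chain backward like this is the same reasoning as the 10/.75 and 13.3/.66 steps above, just written where you can change the assumptions and re-run.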

There's a handful of videos that pop up on YouTube of folks using Stirling engines as phone chargers. I don't know that they're honestly 20W charge rates, but some folks are claiming nearly that. Time for a little more research. 



Tuesday, November 7, 2017

A Little Update on Son of Side Project

I don't want to jinx myself, but I could be getting into the last little bit of the project I call Son of Side Project; adding a wood side to my Breedlove demo guitar that was introduced to Mr. Bandsaw and hasn't been complete in perhaps a decade.

In that last update, I had gotten to the point where I was ready to glue on the internal wood strips (kerfing strips) that would hold the side.  I eventually got those glued in place.  I had some issues with a couple of pieces of it shifting before the glue would hold it.  As a result, I had to learn how to loosen fish glue and re-stick the pieces.  I replaced one.
Once the side was glued on, I took it down to my friend who has a fixture designed for routing the top of the guitar to add the trim binding strips.  The fixture uses a small, trim router and ball bearing collars to allow an exact width and depth channel to be cut.
This is how I brought it back last Wednesday.  There was an unfinished end like that on the front and back of the guitar where we didn't push the router too far.  So how to cut those off and extend the channel the rest of the way?

Rather than try to do this with power tools, like the rest of the channel for the binding, I went for hand shaping the wood.  I clamped a piece of wood to the top of the guitar as a guide and used a 1/4" wide woodworking chisel.  I cut straight vertically leaning the chisel against the guide to form the back wall of the channel.  The flat back of the chisel would form the new back, and I pushed the chisel in a little at a time (no hammer!), then went in parallel to the top from the side taking out small chips to remove the spruce I had just scored with vertical cuts.  It's a lot like chiseling out the waste in a dovetail joint, parts of a mortise, or pretty much anything. 

This picture is preliminary, before I finished cleaning up the channel, and it's magnified much more than I was seeing with Optivisors on (which is why I took the pic).  That step from top to the floor of the channel is 1/16".
Then it was time to start gluing up strips to try and match the colored strips you see in cross section in the corner.  I've done this a couple of strips at a time, and now have that top strip in place. 
The tape is in place as a "clamp" for 12 hours, so I can take it off in the morning.  There's a mess to clean up here, and some gaps that need to be filled with something or other.  The sides and back trim need to be done as well, but that will be simple compared to this top strip. 

Like I say, I don't want to jinx myself, but I'm making progress. 


Monday, November 6, 2017

Thor Ragnarok - Tons of Light Hearted Fun

I've been looking forward to seeing Thor: Ragnarok, the latest installment in the Marvel Cinematic Universe and third Thor installment since the first trailers started leaking out last April.  It looked like it was going to be fun and it turns out that's an understatement.  Aside from the very, very rare comedy that leaves me laughing until my sides hurt, I don't think I've laughed as much at a movie. 

The name, simply pronounced rag-na-rock, is the mythical end of days for Asgard, the kingdom Thor is from.  The movie, then, is about the apocalypse, and it's done in an extraordinarily light-hearted way, which is probably a good way to handle the End of Days ("it's the end of the world and I feel fine").  There are one or two light-hearted scenes in the trailers - the big one at the end with Thor recognizing the Hulk, whom he's supposed to fight to the death, and breaking out in a smile shouting "We know each other!  He's a friend from work!"  The first scene in the trailer is shown as deathly serious, but actually isn't that way in the movie.  Something that stood out in my mind was the scene in that trailer that features actress Cate Blanchett as Hela, the Goddess of Death and the villain of the movie.  The scene was changed from the first way we saw it. 
In the first trailer, this scene takes place in what looks like a big city alleyway.  In the movie, this takes place in an open green field (as you can see), which is supposed to be in Norway.  Thor hurls his hammer at her, which is supposed to be impossible for anyone not pure enough of heart to lift, and she not only catches it one handed, she destroys it.

Besides the regulars of Chris Hemsworth as Thor, Tom Hiddleston as Loki and others you'll know if you've seen the first two, the movie features Blanchett (who seems to be a very versatile actress) and Jeff Goldblum, who is masterful at playing some rather eccentric people.  Probably the only bow to political correctness is making Tessa Thompson, who's a small, African American woman, the Valkyrie.  Valkyries are from Norse mythology (as is the whole Thor universe) and are usually depicted as tall, blond, Norse women.  Ignoring that, Thompson does well in the part.

Y'all know I'm a fan of these light, comic book fantasy movies, but I give this a full five stars and am even thinking of seeing it again.



Sunday, November 5, 2017

The River of No Returns?

Most of you will recognize that as a common nickname for Amazon, the biggest online shopping force there is, for its history of raising stock price while never paying returns (dividends) to investors.  Full disclosure: I don't own Amazon stock, but I'm a Prime subscriber, which means it seems like a good enough deal to pay for.  At least for the last couple of years.  Whether or not that continues is a subject for another day.

One reason Amazon comes to mind this weekend is because while we took a look at Elon Musk's Tesla Motors and its financial woes Friday, we also hear that Jeff Bezos is the richest man in the world, thanks to Amazon stock holdings.  Today's news is that Jeff sold $1.1 Billion worth of stock yesterday; I guess he needed some new slipcovers for the couch or something.

It might be reasonable to ask exactly what Jeff Bezos has done for the world to warrant being the richest man.  Has he invented anything, developed a new drug, a new alloy, or something that makes life better?  Has he made life better by composing new music or created a new comedy?   Well... maybe he's helped make it more convenient in an era of decreasing leisure time.

Unlike Bill Gates, who was the wealthiest man for a long time, Bezos didn't really invent anything new that I can tell.  It has probably been over 30 years since Gates ever slung code, but he did at one time.  Microsoft has arguably created wealth in the sense that it created new products from the intellects of its engineers.  Think of developing code as refining ideas akin to refining ore out of the ground into metals.   Amazon, by contrast, has made shopping more convenient in some ways, and helped define ways that online shopping works, but their main product seems to be convenience.  A few months ago, Mrs. Graybeard and I needed a covered butter dish.  After forgetting it during a Walmart run, we added a "two-fer" pair of Rubbermaid butter dishes to an Amazon order for something else, knowing that because of our Prime membership we'd have them in a few days.  Instead, a small delivery service - probably formed for such contracts - laid them on our doorstep early on Sunday morning - two days later.

As equal time might warrant, after looking critically at Elon Musk my same source, Bill Bonner's Diary took a look at "The World's Favorite Stock".  Some of what he said surprised me.
Two friends and colleagues (both former Wall Street insiders), Rob Marstrand and David Stockman, examined Amazon carefully and independently. They came to the same conclusion: The company loses money in its core retail business.

It disguises the losses with low taxes, capital leases, and accounting tricks – but the losses are real… and unavoidable.

And since that business model requires super-low prices, there is no way for the company to fatten its margins. It can’t make up on volume what it loses on each sale.

In other words, the business model is a failure.

That is no sin… and perhaps no surprise; lots of businesses never make money.

But there is something about the Amazon phenomenon that is truly remarkable – like a plant that needs no light… or a mammal that needs no air.

Amazon is one of the strangest creatures ever to lurk in the capitalist ecosystem.
The second reason Amazon comes to mind is this article that a friend sent me from CNBC: "This 28 Year-Old's Company Makes Millions Buying from Walmart and Selling on Amazon".  To be honest, the fact that it was on Amazon and not eBay was the only thing that struck me about this.  He's doing things with Amazon that I had no idea were available options for a small business.

When 28-year-old Ryan Grant was at Winona State University in Minnesota, he came up with a side hustle to make ends meet.  Twice a year, he organized textbook buyback events on campus.  He listed the books on Amazon and shipped them out to customers around the country for a profit of up to $10,000 a year.  He could use an Amazon provided app to determine what to pay for the used books and list them for sale.  All fine.  The problem was it was very labor intensive packing and shipping books around the country.  The duplex he was renting was full of books and hard to walk around in.

Enter Amazon again.  Their fulfillment services meant Grant could ship all the books in bulk using preferred UPS rates to an Amazon warehouse, where, for a fee, the online retailer handled processing and shipping out each individual order. It made his side hustle more manageable, time-wise.

After graduating, Grant started working in accounting.  He eventually became discouraged with that career field and while trying to think about a new career got the idea to renew his side hustle, only this time with a bigger focus than just textbooks.
After work and on the weekends, he scoped out the clearance aisles at Walmart, scanned a few items using Amazon's app and bought up toys, games, and home improvement items he realized he could re-sell for a profit. A receipt from his early days shows a variety of purchases, everything from vacuums to Barbies, LEGO sets to stainless steel flatware.
The article goes on to describe how he turned part-time work on the side, pocketing about $1,000 per month, into a career: he quit his accounting job in 2013 and developed an $8 million/year business.  
"I went from just me in this business doing around three-to-five thousand dollars in sales per month and now, four years later, we're a team of 11 and we're doing well over $200,000 in sales per month," Grant says. The team had to move to a warehouse that's over five times as large as their first this past July.

Since he started selling on Amazon, Grant says the business is on track to top $8 million in total sales by the end of this year. Profits are heavily reinvested back into the company, though Grant was still able to take a salary of around $150,000 when he was working for the venture full time.
(Ryan Grant - CNBC photo)

And more power to him.  I have absolutely no problem with someone buying at Walmart and selling on Amazon: willing buyers and sellers and all that.  If you shop around and buy it at what you think is the best price, why would you care if he's buying it on clearance and selling it for a profit?  It's especially ironic considering the online "e-tailer wars" between Walmart and Amazon.  It's no secret Walmart is trying to be the dominant e-tailer.  Walmart is trying to vanquish Amazon and hear the lamentations of their women. 

As I said, the only thing about this that really surprised me is that the "side hustle business" was on Amazon and not eBay.  Over the years, eBay has turned from a place where people sell their leftover Beanie Babies or Pez dispensers (how it started) to a 24/7 flea market populated by a lot of small businesses.  Sure, you can buy someone's old Tee shirts, their ham rig, guitar, bicycle or what have you, but when you search listings, you find a lot of small businesses.  I think some buyers just prefer to buy from a shop over Some Dood selling his stuff.  We have known people who hustled around town to buy things and sell them on eBay.  On the other hand, we've bought some tools and other small things from an eBay seller after comparing them to Amazon or other suppliers we know and had the item arrive here in an Amazon fulfillment box.

How do these stories tie together?  Could it be that the most important thing Amazon has done is sell its fulfillment services and their app that helps find prices to help small business people figure out a market? 


Saturday, November 4, 2017

God Help Us All

In the Austrian city of Salzburg, they're putting airbags on lampposts because of people walking into them while looking at their phones.
Lampposts are being covered in airbags to stop so-called 'smartphone zombies' bumping into them as they walk around staring at their screens in an Austrian city.

Salzburg authorities say tourists are increasingly hurting themselves by not looking where they are going while checking their devices.

Locals have described mobile phone users as Smombies, the short form for a 'smartphone zombie', and civic chiefs are taking action to stop them getting injured.
They call this an educational program to get people to be more aware of the world they're in, and they believe it will change behaviors.  I doubt that.  This is being run by the city's Board for Traffic Safety (KFV, from its German initials), so that tells me the taxpayers are paying for this (you can see their logo in the bottom right of the sign on the lamppost).

Actually, the worst part of this story is that Salzburg isn't the only place.
In China, there are special sections of certain pavements that are reserved for people using telephones and walking at the same time.

In Honolulu anyone crossing the road and looking at their phone will be fined.

In the German city of Augsburg they have started putting traffic lights on the ground where they can be more easily seen by people staring at smartphones.
I don't know what to add to this. 


Friday, November 3, 2017

Is Tesla Motors Heading for Collapse?

That's the provocative idea from financial analyst and publisher Bill Bonner in one of his daily letters this week.

The reasons are simple and financially obvious.
The company’s market cap – the value of all its outstanding shares – is at $53 billion. That’s higher than Ford’s market cap, despite Ford making profits of $4.6 billion last year and Tesla losing $67 million.

And Tesla has almost as high a market cap as GM, which had more than $9 billion in earnings last year… and which sold 10 million vehicles versus Tesla’s 76,000. ...

And the losses are getting bigger; a record $619 million in capital disappeared in the last quarter alone.
Graphically, from farther down the piece:
There is no rational reason for Tesla to have a market cap 80% of GM's when they ship 3/4 of 1% of the number of cars GM does (76,000 vs. GM's 10,000,000).  If the reason for the stock market price isn't the pure financials of the company, then it becomes the personal mystique of Elon Musk.  A Cult of Personality around the owner of the company.

Anybody remember John DeLorean and his DeLorean Motor Company?  The same sort of story, 40 years ago in the 1970s.  The DeLorean was sleek and futuristic with its gull-wing doors.  With a largely stainless steel body, it didn't require the painting and finish upkeep that a painted body entails.  It was the car that had to be chosen for "Doc" in the "Back to the Future" movies.
(image source)
While John DeLorean might have been a visionary, he never produced a return on investors' money. 
Poor Mr. DeLorean launched his car company in 1975. Over the next years, he took in – and mostly destroyed – approximately $100 million.

Toward the end, he was so desperate for financing that he was an easy target for the feds’ entrapment program. Claiming to be investors, FBI agents coaxed him into talking about importing $24 million worth of cocaine into the U.S. and taped the conversation.

DeLorean beat the rap, but DMC went bankrupt in 1982.

Later, he faced challenges from investors and car buyers and was driven into personal bankruptcy, too. He lost his home in 2000. Then he lost his mind… suffering strokes, which killed him in 2005.
Much as you don't want to turn your government over to a charismatic con man, you don't want to invest in a company run by one.  Yes, I know, times are different now: nobody invests for the long term based on a company's actual value.  They buy stocks, let them pump up some amount, sell them to a bigger fool, and move on to the next target.  By any conventional measure of stock value, the entire market is overpriced and due for a major correction. 

Is Elon Musk that charismatic con man we shouldn't invest with?  I know some of you think so; you've said as much in comments to other articles.  There's something behind the market's enchantment with Tesla, and it's not the detailed financials.

Bill Bonner's conclusion:
“When will Tesla’s stock promote finally implode?” asks Kupperman.

Answering his own question: “When people realize that it’s a cash incinerating vanity project for Elon Musk, at a time when newer, better products are coming to the market. That point is coming soon. Very soon.”



Thursday, November 2, 2017

Check Your Fire Extinguishers

I'm not sure everyone has heard about the massive Kidde fire extinguisher recall, but Kidde is the giant in the field and more than 40 million fire extinguishers are affected.  Two product lines are involved: plastic-handled models and push-button models. 
The US Consumer Product Safety Commission announced the recall of more than 40 million Kidde disposable fire extinguishers Thursday, saying they may malfunction during an emergency.

The faulty extinguishers are equipped with plastic handles and push-buttons and can become clogged. Their nozzles also may detach with enough force "to pose an impact hazard," the CPSC said.

The recall covers 134 models of Kidde plastic-handle fire extinguishers manufactured between 1973 and August 15, 2017, including models that were recalled in 2009 and 2015. It also includes eight push-button models manufactured between 1995 and September 22, 2017.
The company has set up a recall web site to help you determine whether your fire extinguishers are included - Kidde makes extinguishers branded for dozens of other companies.  We just spent the evening inspecting the four we have in various places around the house, and all but one are affected.
Replacements can be applied for online.
If Kidde determines that you have an affected model, the company says it will send you a replacement within 10 to 15 business days. The new extinguishers contain metal parts instead of plastic.

Wednesday, November 1, 2017

NASA Prepares a Probe to Fly Closer to the Sun Than Ever

I hereby promise not to make the joke that they'll fly at night.

NASA scientists and engineers at Goddard Space Flight Center are preparing a mission to observe the sun from closer than any probe has ever flown.  For comparison, Mercury orbits the sun at a distance of 36 million miles.  The new probe, named after solar astronomer Eugene Parker, will navigate a complex series of orbits whose closest approaches all fall inside Mercury's orbit.  On its final orbits, it will pass just 3.7 million miles above the "surface" of the sun, inside the superheated corona. 
[NASA is] putting the finishing touches on the Parker Solar Probe, a 9-ft., 10-in.-tall, 1,350-lb. spacecraft that will take off sometime between July 31 and Aug. 19 of next year on a 6.9-year mission to explore the sun.  The probe will be sent on its way atop a second stage mounted on a Delta IV Heavy lifter. It will go into a fairly eccentric elliptical orbit around the sun (see figure below). It will orbit the sun 24 times, and on seven of them will make flybys of Venus to increase its speed and tighten its orbit. The last orbit around the sun will take only 88 days traveling at up to 450,000 mph.
That speed of 450,000 mph will make it the fastest man-made object by a large margin - the previous record holders were NASA's Helios 1 and Helios 2 probes (launched 1974 and 1976, respectively) at 157,000 mph (253,000 km/h).  Still, Parker's 450,000 mph is only about 0.00067 of the speed of light.  At that speed, the nearest stars are more than 6,000 years away - and we have no known way to generate even that speed without multiple gravitational slingshots.
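Those figures are easy to sanity-check.  A quick back-of-the-envelope calculation (mine, not NASA's - I'm assuming Proxima Centauri at roughly 4.25 light-years; the speed comes from the article above):

```python
# Back-of-the-envelope check of the Parker Solar Probe speed figures.

MPH_TO_MS = 0.44704          # exact conversion: 1 mph = 0.44704 m/s
C_MS = 299_792_458           # speed of light in m/s

probe_mph = 450_000
probe_ms = probe_mph * MPH_TO_MS              # about 201,000 m/s

fraction_of_c = probe_ms / C_MS
print(f"fraction of light speed: {fraction_of_c:.5f}")   # 0.00067

# Travel time to the nearest star at that constant speed:
proxima_ly = 4.25            # assumed distance in light-years
years = proxima_ly / fraction_of_c
print(f"years to Proxima Centauri: {years:,.0f}")        # ~6,300 years
```

So even at record-setting speed, interstellar distances remain hopeless.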
The probe will carry instruments to study magnetic fields, plasma and other energetic particles; it will also image the solar wind, a stream of ionized gases that can blow through the solar system at more than a million mph. The probe’s three major scientific objectives are to:
  • Trace the flow of energy that heats and accelerates the solar corona and solar wind.
  • Determine the structure and dynamics of the plasma and magnetic fields that are the sources of the solar wind.
  • Explore mechanisms that accelerate and transport energetic particles.
 
Of course, the danger here is the heat from the sun, so the probe will be positioned to always have its heat shield between the sun and the sensitive instruments. 
The primary bulwark against the heat is the Thermal Protection System, an 8-ft.-diameter, 4.5-in.-thick heat shield made of carbon composites that will protect most of the spacecraft’s components from the full brunt of the heat—up to 2,500°F. It is imperative that the probe maneuvers and changes its attitude to keep the TPS between the sun and the probe’s internal components. But the nine-minute lag between when radio messages on position and speed are sent and received by controllers on Earth—and another nine minutes for the adjustments to be radioed back—meant that engineers had to make the Parker as autonomous as possible and able to maneuver on its own to keep the TPS correctly positioned. This makes the probe one of the most autonomous spacecraft ever built by NASA.
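To get a feel for why that heat shield is necessary, the inverse-square law gives a rough estimate of how intense sunlight is at Parker's closest approach compared to here at Earth.  This is my own back-of-the-envelope sketch, not a NASA figure, and it treats the quoted 3.7-million-mile altitude plus the sun's roughly 432,000-mile radius as the distance from the sun's center:

```python
# Rough inverse-square comparison of solar intensity at Parker's
# perihelion vs. at Earth's distance from the sun.

EARTH_DIST_MILES = 93_000_000      # ~1 AU
SUN_RADIUS_MILES = 432_000         # approximate solar radius
perihelion_miles = 3_700_000 + SUN_RADIUS_MILES

# Intensity falls off as 1/r^2, so the ratio is (r_earth / r_probe)^2.
ratio = (EARTH_DIST_MILES / perihelion_miles) ** 2
print(f"solar intensity vs. Earth: roughly {ratio:.0f}x")
```

That works out to sunlight several hundred times as intense as at Earth, which is why even a 4.5-inch carbon composite shield is only part of the thermal design.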
The 4.5-inch thick carbon composite heat shield isn't enough.  The reality of powering the spacecraft means that it has to deploy solar panels - photovoltaics - to convert the intense sunlight into electrical power.  NASA learned during the MESSENGER mission to Mercury how to manage the heat that sunlight's intense radiation causes in photovoltaics; that heat eventually degrades the solar cells to unusable output levels.  The Parker probe's solar cells will be mounted to a metallic channel with cooling water pumped through it to reduce the temperature of the cells. 
The probe will also use its autonomous capabilities to keep the adjustable solar arrays or wings as shielded as possible, but still exposed enough to generate power when needed. This is critical: NASA estimates that at some times during the probe’s mission, a change of only one degree in the solar arrays’ wing angles would call for a 35% increase in the liquid cooling subsystem’s output.
I find it cool that plain old deionized water is being used on a mission to the sun to keep the solar panels below a temperature of 320°F. 

The mission has a web page with more interesting background, as does Machine Design.