Back-of-the-Envelope N2 TEA Laser Beam-spot. Modifications to both the capacitor and the electrode rails improve focus, and a small optical-grade mirror at the back of the laser also seems to improve beam focus and quality. The laser now runs with a two-layer acetate dielectric, which allows higher sustained voltages (14 to 17 kVDC) without evident dielectric breakdown (so far...). Amazing what can be built with some bits and pieces from TSC Hardware, Staples Office Supplies, and Home Depot. If constructing an N2 TEA laser, make sure to wear hearing protection, wear plastic UV-blocking goggles (test by confirming the beam is *fully* blocked if you shine it thru them onto paper - you should see no beam spot), and use an insulated rod or screwdriver, with one hand only, to make minor electrode-rail position adjustments if necessary. Voltages of > 10,000 VDC, with 50 mA of current, can stop your heart dead if you get current flow up one arm and down the other. If you are not familiar with working with high-voltage circuits, read up on safety protocols first, before switching anything on, and then follow the rules. I set the exposure to emulate ISO 800 film speed on my Huawei cellphone to capture this image. The beam spot has a cleaner focus with better geometry, but it is still not spherical, indicating the beam is dispersing. At present, no optical focusing is being used.

Guide to this Site

Close-up of N2 TEA Laser in operation. You can see the beam spot on a white envelope at top centre of the image. Rail geometry is: left rail = inverted angle-edge, right rail = double "L" pieces, with an insulated, weighted steel rod holding them in place on the capacitor foil.

Decided I should put a top-level explanation here. This was originally just my Xerion research log, plus some notes about some APL & GNUplot apps I ported to Android.  But I started keeping daily notes here (the Xerion neural-network stuff and the other AI research is oriented to financial markets, as I need to trade to pay the bills).  And then it got political, as I saw political activity destabilizing most attempts at financial-market analysis - AI or just plain old hum-int.  Since my site is self-financing, and not subject to censorship (like Google and Facebook are), I decided to just try to tell the truth.  We should be entering a great golden age of freedom, fairness and prosperity.  But instead, we seem to be going rapidly wrong in many areas.   This disturbs me, so I write about it.  Our equity markets - local & global - are now hostage to the foolish awfulness of our modern politics.  Sometimes, it looks more like the 1930's than the emerging 2020's.  And we are being primed for more conflict, it appears.  Wise (and maybe even angry & unwise) voices should try to stop this wrong directional shift.  Daily notes follow... (Lately, I've been building a home-brew TEA Laser - a diversion from markets & politics.  These were invented in Canada, and work by lasing the nitrogen in the air.)

[Feb. 17, 2019] - Pure techy stuff:  I needed Adobe on a CentOS Linux box (7.4), and found the trick to putting Acroread 9.5.5 (Adobe Reader) on the box at two sites:

1) Location of last Linux version:  "" as an rpm file.

2) Libraries (special for CentOS 7) needed to get it to work: ""  (from a Physics lab in Holland.  Works. Confirmed.)  It took a lot of messing around, but I got Adobe Reader working on my CentOS 7.4 production box.

But I managed to wipe out the GNOME desktop, and to fix it, I uninstalled stuff and had to re-install with:

yum groupinstall "GNOME Desktop" "Graphical Administration Tools"

I kept getting a transaction error:

Transaction check error: file /boot/efi/EFI/redhat from install of fwupdate-efi-12-5.el7.x86_64 conflicts with file from package grub2-common-1:2.02-0.65.el7_4.2.noarch

due to the hacking to put Adobe "acroread" on the box, and resolved it with: 

yum upgrade grub2 firewalld

which got rid of another error, related to a program "fwupdate-efi" not being found.  But GNOME would still not start.  I had to "su" to become root, and remove a "99-nvidia.conf" file in the /etc/X11/xorg.conf.d directory, so I could recover the X-windows GUI environment.  (The machine does not even have nvidia hardware, so it is unclear how the config file got there...)  I put these details here so anyone else who has this issue can find the solution.

[Feb. 14, 2019] - World just keeps getting sillier.  The lack of wisdom that characterizes modern politics is really sad.  Our best people avoid politics now, because it is a sh*t-show of stupidity and deception that resembles collective madness, perhaps?  In Canada, instead of strong, wise leadership, we get fake scandals, drift and deception.  Our business climate is not bad (thanks to the fine wisdom of our Victorian founders), but our parliamentary process looks to be going wrong.  Of course our federal Attorney General should listen to the Prime Minister on serious legal issues that would determine if a major Canadian construction firm should face *criminal* prosecution.  She *should* be influenced by our PM.  What is wrong with people's thinking now?   But she has resigned from cabinet, and hired a lawyer (seriously!), which is plain idiotic.  So now, another fake, irrelevant, manufactured-for-the-media "scandal" has been puffed up by our hollow politicians, so they can get airplay.  Maybe they could focus on building that pipeline to get our Alberta oil to Asia instead, eh?  President Trump in the USA has a lot of faults - but I envy the Americans, as at least they have a strong leader who can take on difficult projects and actually get things done.

[Feb. 9, 2019] - The N2 TEA Laser is a curious device.  It really works.  I recall reading the SciAm article in 1974, when I was very young (I read everything I could). Lasers are mainstream tech now, but in the early 70's, they were still pretty new.  Rail-guns will make better weapons than lasers ever will, since a mirror is all the defence one needs against a laser attack.  I've often wondered if charged uranium or thorium isotopes or ions could be accelerated in the atmosphere to near light-speed, and used to "ablate", say, the cranium of your enemy, or his warship. (Neutron ablation is a problem in hot reactors, and also in IEC fusion devices.  A photo-micrograph of the reaction-chamber wall will look like swiss cheese after your reactor has been running for a while.  A proton has about 1800 times the mass of an electron, so a proton or neutron beam ought to pack a bit more of a punch than an electron beam.  But the air just stops and scatters it.  Or maybe a heavy isotope of something?  The Yanks use "spent" uranium as bullets in the high-speed gatling guns they put on their warships. These work well now that the interlocks prevent them from tracking targets thru 360 degrees and shooting the tops off the shipboard antennas. If your a/c isn't cut to pieces by the storm of heavy metal, your kids will probably die of radiation-related diseases 20 years after the war ends. <sigh...>)   Tesla had supposedly considered this (he was looking at charged mercury droplets, accelerated from inside a sphere on top of his "tesla coil").  This was all in the "Star Wars" research efforts under Ronald Reagan ("Ron Ray-gun" among the folks in the community).  Lots of money spent, but no actionable technology - at least none that reached the public.  It remains that a squad of well-supplied guys (or gals?) with M-16's and RPG's is probably more lethal than a big boat full of lasers and rail-guns.  
Accuracy still counts, and when everything is moving and fluid, the general "static-ness" of big, high-powered, high-tech stuff typically proves its undoing.

And here is another annoying tech problem.  People wonder where the "Aliens" are.  Well, they probably all whacked each other.  In order to interact in any way, we will need ships that can travel near light-speed.  But if you have a near-lightspeed vehicle, you can just fly it into any planet of folks that you don't like, and probably vapourize any Earth-sized orb.  A ship the size of a submarine, travelling at even some fraction of "c", will impart enough kinetic energy to obliterate a planet - and any citizens on that planet will have zero time to react, since you will be approaching at near the detection limits of any sort of radar or image-monitoring device.  It's a sobering thought.  Any "starship" of any kind will be a perfect, undetectable, and un-stoppable planet-killing machine.   Any civilization that can develop ships that can reach relativistic speeds poses an extreme threat to all those around it.  It's the KE=1/2mv**2 relationship - the energy transfer grows linearly with increases in mass, but with the square of the velocity.  If you fly a starship into a planet, you would either bore a hole right thru it, or more likely, blast it into small bits. Consider: A megaton of TNT (a million tons of TNT) is a popular way to characterize an atomic explosion. 1 megaton is equal to roughly 4.184 petajoules (peta => a million billion, or 1x10**15).  A joule is a measure of energy.  It is one Newton acting over a distance of 1 meter.  A Newton is the force needed to accelerate a mass of 1 kilogram at a rate of one meter per second, per second.  So the Joule is a unit of energy, measured in kilogram meters squared per second squared.  Our popular unit of power, the watt, is the work that one joule of energy per second provides.  If you push one ampere of electricity thru one ohm of resistance, you dissipate 1 watt of power - one joule every second. 

Ok, so if we measure our starship's mass in kilograms, and our velocity in meters per second, then we get our kinetic energy in joules.  The Endeavour Space Shuttle weighed 78,000 kilograms empty, without fuel.  Let's assume we can fly up to 2/3 the speed of light, which is roughly 3 times 10**8 meters per second x 2/3 => so we have 2 x 10**8 meters/second.  So, the KE in joules of the impact of a 78,000 kilogram starship hitting a planet would be 1/2 x 78,000 x (2 x 10**8)**2.  The really crazy bigness of this number comes from squaring the 2/3rds speed of light.  We have to multiply (2x10**8) x (2x10**8), and we get a crazy big number that is equal to: 40000000000000000.  And then we multiply that by half the mass of our small starship (1/2 x 78,000 = 39,000 kg).  This gives the KE (kinetic energy) in joules: 1.56 x 10**21 joules.  Now, remember that a one-megaton atomic bomb is about 4.184 petajoules (from Wikipedia..).  One "peta" is 10 to the 15th power. So our basic 1-million-ton-TNT A-bomb is 4.184 x 10**15 joules.  But smashing our small starship into a planet at 2/3rds of lightspeed gives an energy of 1.56 x 10**21 joules.  So, divide it out: 1.56x10**21 / 4.184x10**15 = 372,849.  
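The KE=1/2mv**2 arithmetic is easy to check with a few lines of Python, using the same rough inputs as above (78,000 kg at ~2/3 of c, keeping the 1/2 factor):

```python
# Back-of-the-envelope check of the starship-impact numbers.
mass_kg = 78_000                 # Endeavour's empty mass
velocity_ms = 2.0e8              # ~2/3 x 3e8 m/s, rounded
ke_joules = 0.5 * mass_kg * velocity_ms ** 2   # KE = 1/2 m v^2

MEGATON_J = 4.184e15             # 1 megaton of TNT, in joules
print(f"KE = {ke_joules:.3e} J")                          # 1.560e+21 J
print(f"= {ke_joules / MEGATON_J:,.0f} megatons of TNT")  # ~372,849
```

Hundreds of thousands of megatons, from a ship the size of a submarine, with no fancy warhead at all - just kinetic energy.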

So, the energy released by this action would be the same as roughly three hundred and seventy thousand quite big atomic bombs, all being set off at the very same time.  Foooosh.  Nothing left.  Remember, the Hiroshima bomb was much *less* than a megaton. It was around 15,000 tons of TNT equivalent.  A megaton is 1 million tons of TNT, or roughly 67 times more powerful than the Hiroshima uranium bomb.  The biggest hydrogen bomb the Russians set off was around 50 megatons.  Fusion power is pretty awesome.  It blew a hole in the upper atmosphere, and contaminated the world for a while.  Yet it was nothing, compared to what a single lightspeed-bombship would do.

So, where are the Aliens?  If they had any conflicts at all, they will have simply used starships to lightspeed-bomb each other's planets.  So there likely aren't any Aliens at all, even if there were some to start with, somewhere.  And if there really were some, they would be smart to keep *really quiet* about their existence, right?  Since any rogue pilot with a lightspeed ship could obliterate their homeworlds.

 [Feb. 7, 2019] - Ok folks - up on my soap-box:...

Science is good. Religion is bad.  And Politics is tragic.  I watched the full "USA State of the Union" Trump-speech, and I felt like I was back in Byzantium, watching Theodora's husband at the theatre.  And it was pure theatre - Nero would have loved it.  But I had a bit of Sartrean nausea watching the amazing rhetorical brilliance of the whole thing.  People who respect the law, and like to eat sausage naturally feel sick watching either being made.  National politics is just office-politics on a grand and scary scale.  Trans-national politics is too often just gangster-games, economic fraud, or bombing.  We need to stay focused on reason & rational action, and distrust theatre.

One thing for sure: We Terrans had better pursue the "Elon Musk Doctrine", and move our political-social focus outward, off-planet, or we risk collapse into collective violence and mass-warfare.  We desperately need low-cost mass-energy conversion devices, and we need them soon, before we start killing each other en-masse.  Planetary population has gone from 2 billion to 7.5 billion in my lifetime so far, and this growth-rate will not continue.  This is mandated by the laws of mathematics and diminishing marginal returns.  Population will either S-curve, or spike-retrace, like market-prices do.

Really, if we can build fleets of nuclear-powered submarines, we can build fleets of nuclear-powered spaceships, but not if we are fighting with each other on half the planet.  We need to get the witless politicians, pretty-boy smooth-talkers, and brutal lie-monkeys off the political stage, and recognize that our "nation-states" are dangerous social-political-economic constructs - much more dangerous than our technology.  We need to *limit* government activity, and remove restrictions on technology, instead of constantly doing the opposite.

For example, I am certain we can "tickle" palladium-saturated deuterium or deuterium ions in a low-density vacuum environment, to fuse directly into helium and tritium, and release some serious energy in the process. (So was Nobel prize-winning Russian physics genius Kapitsa - read his 1978 Nobel acceptance speech...)  But there is some trick to it that we don't know yet.  I've built a high-vacuum fusion apparatus (the IEC Fusor), which has a small neutron-output signature (proving hot fusion is occurring), but it is not an economic device - the power input needed is 1200 watts (from my Molectron power supply), but the net additional power output is probably no more than 1 nano-watt, not enough to be useful.  But it does produce real fusion output (the device gets hot, as do the resistors...), so anyone running an IEC Fusor is way, way ahead of anything that ITER or the Big Science guys are doing, at least from an engineering-economic viewpoint. Sure, fusion power is the "Holy Grail", but we just need to look up on a sunny day to see the truth of what is possible.
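For perspective, the power ratio (what the fusion folks call "Q") for a hobby fusor like mine is a one-liner - a rough sketch using the 1200 W input and my ~1 nW output estimate from above:

```python
# The engineering-economics point in numbers: Q = power out / power in
# for a hobby IEC fusor, using the rough figures from the note above.
power_in_w = 1200.0      # input from the power supply
power_out_w = 1e-9       # ~1 nW of fusion output (my rough estimate)

q = power_out_w / power_in_w
print(f"Q = {q:.1e}")    # ~8e-13: genuine fusion, hopeless economics
```

Break-even is Q = 1, so there is a long, long way to go - but at least the fusor's Q is a measured, repeatable number on a basement budget.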

My electricity to the farmhouse comes from a heavy-water fission power reactor, where uranium atoms are giving up their lives to energize our local power grid.  Works pretty well.  But we must move our technology along, and reach the point where PNR's (Personal Nuclear Reactors) are available, effective technology, so we can live both here, and in space and off-planet.  Mars and the Moon are very cold, and the moons of Saturn and Jupiter are really cold.  It was minus 25 C *here* a few days ago.  At my place, if we lose our wireline link to the big reactor on Lake Huron, we risk freezing even here, unless we can cut forest wood and burn it in the wood stoves at a rate sufficient to prevent the house cooling down below freezing point (a non-trivial task that requires a lot of chemistry, technology and effort - gasoline, diesel fuel, chain-saws, a tractor, and a hydraulic wood-splitter powered by a small Briggs & Stratton gasoline engine, as well as the human effort to cut apart the trees, dry the wood, and fire the stoves). 

So we must push the technology now, and push it hard.  And we now *must* focus outward.  "Climate Change" is irrelevant, as folks on this planet face a brutal, painful population crash regardless of whatever fuel they burn.  But markets survive spike-retrace events because of underlying growth and a viable economic-technology mix.  We need to keep that magic mix happening.  We need to dump the superstitious foolishness of religion and godism, and recognize that Science is our only choice now, and that we have to try *a lot* of different ideas and approaches.  And we need a flexible, free-market-driven economic environment, so that viable, successful technologies can reach rapid commercial acceptance, the way our AC power-grid model did.

And we need to see that the political people are very dangerous, and their tub-thumping theatre can be as tragic as it is grotesque.  But just because most people are weak and stupid and easily misled, does not mean we all have to be.

Do science like your life depends on it people, because it does.  And make sure your children study, stay in school, study science, learn math (at least some..a little goes a long, long way), and become and remain fluent in technology.  Leave the stupid god-books on the shelf, or in the museums, where the failed, dead things are.

We had a market burp today, but the path probably remains upwards, to levels that will amaze and distress.  Stops will be gunned, but the DJIA need only grow for a few years like China's economy, and we will see 40,000, and pundits will blanch, choke and sputter.  The USA, despite its crazy wildness (or its wild craziness?), remains in the driver's seat.  And they know science, and how to connect it to the economy, better than anywhere else in the world.

[Feb. 6, 2019] - Re-designed the N2 TEA laser, with a larger capacitor, using two layers of acetate dielectric.  A small 35x32 cm capacitor worked well (also double-layer), but was repeatedly damaged by over-voltage dielectric breakdown (like the first very large capacitor was).  Beam spot on white paper is quite bright, with a close-up image of the beamspot shown above.  Note that you can see the diffraction lines (they are very clear when viewed in reality, but become blurred by the cellphone camera).
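For anyone curious about the rough scale of such a flat-foil capacitor, here is a back-of-the-envelope parallel-plate estimate in Python. The permittivity and sheet thickness are guesses, not measured values from my build - only the 35x32 cm plate size comes from the log:

```python
# Rough parallel-plate estimate for a flat-foil TEA-laser capacitor.
# Assumptions: acetate relative permittivity ~3.3, each sheet ~100 um.
EPS0 = 8.854e-12                 # vacuum permittivity, F/m
eps_r = 3.3                      # assumed for acetate sheet
area_m2 = 0.35 * 0.32            # the 35 x 32 cm plate
thickness_m = 2 * 100e-6         # two stacked layers, ~100 um each

cap_f = EPS0 * eps_r * area_m2 / thickness_m
print(f"C ~ {cap_f * 1e9:.1f} nF")       # a few tens of nF with these guesses
energy_j = 0.5 * cap_f * 15_000 ** 2     # E = 1/2 C V^2, stored at 15 kV
print(f"E ~ {energy_j:.2f} J per shot")
```

A couple of joules dumped in a couple of nanoseconds is roughly a gigawatt of peak electrical power - which is why even a weak home-brew nitrogen laser produces a visible beam spot.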

A note here on construction: If you build a home-made TEA laser, when you fire it up the first time, it will almost certainly fail to lase.  It is critical that the electrode-rail separation be adjusted while the laser is on.  I use a highly insulated screwdriver, and make light taps on the beam-rail electrodes, to get a narrow gap of roughly 1.2 to 1.7 mm.  You have to fuss with the rail-gap, and also with the spark-gap distance (I seem to need roughly a 4 mm gap).  If the gap is too narrow, or if the capacitor has too much inductance, you will get no laser action, even if you see lots of arcing between the electrode rails.  The rail-to-rail arcs are *not* responsible for the laser action.  Also, rail geometry is critical, as one of the rails has to be raised up off the capacitor to allow pre-ionization to occur.  Holding the insulated handle of a long screwdriver, I have to tap, tap, tap gently on each end of the right-side rail assembly (two nested "L" rails made from aluminum, weighted down by a plastic pipe with a cylindrical piece of steel inside the pipe).  When the correct inter-electrode gap is achieved by tapping and moving the rail assembly slightly, bit by bit, the beam-spot will suddenly appear on the paper.  It is almost magical.  And although the above image does not show it well, there is a distinct pattern of vertical diffraction lines in the beam-spot, which I find quite curious.  An artifact of the edge of the rail geometry, perhaps?

[Feb. 5, 2019] - Markets tracking as per forecast (to my surprise - I am always surprised when things work - Murphy's Law is the only true rule...).  Polar vortex (-23 C temps) replaced by +12 C temps - quite the swing, approx. 35 C in a few days.  I remain a student of early technology: electrical, computer, particle-beam and traditional weapons, nuclear power devices, etc.  The early stuff was highly tractable, operationally verifiable, and most importantly, repairable in the field.  Image at the right side is a highly sensitive H. Tinsley Co. Ltd. galvanometer, which could detect extremely small currents.  Henry Tinsley founded the company in 1904, in the UK.  The wooden-boxed device (wooden box in image) is a 3184D "Potentiometer" which (along with my surge-protector for the wi-max internet link) I recently repaired.  Same for the "Electrohome" quad-stereo receiver, and the black Asus Intel-based uni-processor (runs CentOS 6.6 Linux, as it is a 32-bit box, and it really works well). 

Why my attraction to the old technology?  Because when it breaks, one can fix it.  That means it can run forever, at least if parts remain either available or can be fabricated or scavenged.  If you can't open your "black-box", understand exactly how it works, and fix it, then one day, it will probably kill you, and perhaps all those around you.  My 3184D is interesting, as like the big Electrohome receiver, it was manufactured in Canada.  This Tinsley device, which I acquired at an antique sale, was made in St. Jerome, Province of Quebec.  Like my "church steeple" Webley revolver, I had to buy it and restore it, because it just looked so cool.  Will be putting together a modified version of the TEA laser, and now hope to build a carbon-dioxide laser.  The operational characteristics of the N2 TEA laser are quite amazing.  Why does lasing occur orthogonal to the electrode rails?  I made a much smaller capacitor, and it lases almost as well, with a bright blue beam-spot on plain white paper.  Electrode shape seems to be quite critical.  I will publish my specs.  And I need more resistors (I have melted a few...).  I want to try a custom-made power-supply, using a computer-monitor flyback transformer, or perhaps a transformer scavenged from a microwave oven.  Perhaps I will even make a hologram rose.

[Jan. 31, 2019] - Polar vortex weather... was minus 25 degrees C this AM.  Vehicles have to have their block-heaters plugged in, in order to start.  Internet has been offline for several days.  I finally repaired it myself this afternoon.  Mars weather, and a Mars technical profile - ie. you are on your own, so make sure you can use a soldering iron, have lots of spare parts, and know your way around both the hardware and the software.  Mkts are tracking as expected, and we remain all in and long.  These are curious times we live in now.

Moved TEA Laser images to "TEA Laser Details" section. 

[Jan. 25, 2019] - We're having Martian weather here.  It was minus 23 C a few nights ago. (Well, Mars is more like -70 C, so not quite..)  Built a tiny-perfect version of the TEA laser, with a much smaller capacitor, and it still worked ok. Speed of light limits the beamline electrodes to roughly 46 cm, given what I assume my capacitor dielectric factor is.  Building another version tonite.  Market is tracking ok, so I remain in "watch but don't trade" mode.  Folks are calling for the SEC to investigate the Dec. 24th low-volume meltdown, but nowhere is anyone saying "gunning for stops", which is pretty clearly what the algo's did. 
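The ~46 cm rail-length figure can be sanity-checked with a timing argument: the discharge wave has to traverse the rails within the few-nanosecond N2 upper-state lifetime, and the dielectric slows the wave below c. A rough sketch, with an assumed dielectric constant (not a measured value from my build):

```python
# Sanity check on the ~46 cm rail-length figure.  The discharge wave
# must traverse the rails within the N2 upper-state lifetime, and the
# dielectric slows it by a factor of sqrt(eps_r).  eps_r is assumed.
C_LIGHT = 3.0e8                    # m/s, vacuum speed of light
lifetime_s = 2.5e-9                # mid-range of the 2-3 ns N2 lifetime
eps_r = 3.3                        # assumed for the acetate dielectric

wave_speed = C_LIGHT / eps_r ** 0.5
max_rail_m = wave_speed * lifetime_s
print(f"max rail length ~ {max_rail_m * 100:.0f} cm")   # ~41 cm
```

A crude estimate, but it lands in the same ballpark as the 46 cm figure, which is reassuring.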

[Jan. 22, 2019] - Experimenting with the N2 TEA laser, and managed to blow up the capacitor (see image at right - over-voltage dielectric failure.  Oops!).  This laser is fascinating, and raises many questions - such as why the strange diffraction pattern on the beam-spot, and why does the laser produce a much brighter beamspot forward, rather than in the aft direction?  Probably the beamspot diffraction pattern is an artifact of the 337.1 nm wavelength light passing out from the space between the beamline electrodes. But it is quite pronounced (I am using no optical focusing).   I must rebuild the device with enhanced-layer dielectric (two layers of desktop acetate plastic), so I can try higher voltages without destroying the capacitor.  How many layers, before capacitor inductance becomes too high, and there is no laser action?  And how do I make a dye laser?  And how can I put the whole thing in a vacuum tube, and then still adjust the rails?  And how about using CO2 (carbon dioxide)?  Will I need to water-cool the thing then?  Hmmm.

[Jan. 21, 2019] - Happy Martin Luther King Day!  US Mkts closed, so no action.  Missed the lunar eclipse last nite.  Too tired from a couple of all-nighters.  But some nice results.  Building a super-radiant N2 laser in the basement has been educational.  Nitrogen actually *absorbs* UV light pretty effectively, so it takes some finesse (which my Webster's defines as: "To bring about by adroit maneuvering" for verb-intransitive), to actually get N2 to lase, and generate a beamline.  But it does, and quite well, in what is termed a "super-radiant" manner.  But it all depends on timing (like much in life, as well as physics). 

The excited-state lifetime for the N2 molecule is only about 2 to 3 nano-seconds, so pumping them all up *quickly* is critical.  The capacitor has to discharge *now*, where the duration of the now-event is less than 2 nanoseconds.  That means the spark-gap (which is just a fast switch) has to switch quickly, like "BAP!", and not "BAAAAZZZZZZP!", which is the problem I was having.  I was using two pointy screw-nails pointed at each other.  This was fine for a tesla coil, but really bad for a TEA laser.  The photo at right shows the new two-spheroid spark-gap, which is set to 4.3 mm (as measured by micrometer). 
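The reason the layout matters so much is stray inductance. Even a crude lumped-element estimate shows that a few nanohenries of loop inductance stretches the discharge far past the ~2 ns window - which is why TEA lasers use the flat-plate "stripline" capacitor geometry rather than an ordinary capacitor on wires. A rough quarter-period sketch (all values assumed; a true flat-plate transmission line behaves better than this lumped model suggests):

```python
# Why stray inductance kills the laser: a discharge loop rings with a
# quarter-period of (pi/2) * sqrt(L*C), so even a few nH stretches the
# pulse far beyond the ~2 ns pumping window.  All values are assumed.
import math

cap_f = 10e-9                          # ~10 nF flat-plate capacitor
for loop_l in (5e-9, 50e-9, 500e-9):   # tight layout ... sloppy leads
    t_s = (math.pi / 2) * math.sqrt(loop_l * cap_f)
    print(f"L = {loop_l * 1e9:5.0f} nH -> discharge ~ {t_s * 1e9:5.1f} ns")
```

Even the tightest lumped case comes out around 11 ns - too slow - which is the lumped-model way of seeing why the capacitor, spark-gap and rails must form one low-inductance flat sandwich.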

It really is quite exciting (for the person, as well as for the nitrogen molecules..), when you get it all configured, turn on the voltage, and you get a bright beam-spot.  I spent at least two days fiddling with different capacitor sizes, rail types and rail geometry, and voltage levels, before I determined the problem was my (very) crappy spark-gap device.

Of course, I now want to replace the (amazingly loud and bright) spark-gap switch with a big thyratron tube.  I found a 5948A thyratron on Kijiji (from a military radar array), but it is burned out, apparently.  I need to be able to switch at least 15 KVDC in no more than 1.5 nanoseconds, and this turns out to be a non-trivial requirement.  There are some semiconductor devices (thyristors) that can respond this fast, and to this much voltage, but I am unsure of how and where to acquire these.  Plus I also need a trigger-pulse generator which can operate quickly.   So for now, it's the spark-gap or nothing.   I want to put the whole assembly in a glass tube, and try the laser with different pressures and different gases. (I have a gaseous diffusion pump, and a mechanical vacuum pump.)  Carbon dioxide apparently works well, and lets one stimulate up a good strong pulse.   Of course, my objective is to build a multi-megawatt device that can do interesting work.

[Jan. 20, 2019] - TEA laser works, but if I run up to 20,000 volts DC, and move the spark-gap distance too far back, the dielectric gets punched thru at the spark-gap connection.  But I found I could just cut a piece of acetate, a couple inches square, and put it over the hole, and continue to use the capacitor & the laser.  Here is a video of the TEA laser in operation, that I made earlier this evening:

[Jan. 19, 2019] - Update: TEA Laser works incredibly well now, to my surprise!  The key was to build a spark-gap device that had a spherical electrode on one side, and a flat piece of aluminum metal on the other.  This lets the spark-gap switch close in 1 nano-second, which is required to drive the discharge circuit, and hence for the laser action to take place.  Population inversion now occurs, and the laser beam spot now gets brighter as I increase the voltage.  As per professor Csele's notes, I found the spacing on the spark-gap was critical.  At a close spacing (maybe 1/16th inch), I got no laser action.  But increasing the gap to 3/16ths, I got perfect, consistent laser action, and a bright beam spot.  And the beam-spot has that strange optical property that coherent laser light has, where it appears bright yet diffuse, made up of light-dots (and this happens despite the beam being UV, and only visible in the photo-responsive action of the highlighter ink). 

[Jan. 19, 2019] - The first TEA Laser variant was invented in Canada, in the late 1960's, at the Defense Dept. Research Centre in Valcartier, Quebec, by Jacques Beaulieu.  It was kept top-secret (the Cold War was on, the DEW line was in operation, we had been doing "duck and cover" drills at our school, etc...), so details were not published until 1970.  Here is the Wikipedia stub: 

A modern version has been made at Niagara College Laser lab by Professor Mark Csele, with details here:

And last night (morning?), around 5:00 am, I managed to get the electronics to work, and I got some consistent beam action, which was dramatic.   I must have tried 20 different electrode rail configurations, and at least as many capacitor sizes.  The electronics of the trigger circuit is *very* critical, as a discharge event that is only 1 or 2 nano-seconds long has to occur, in order to "pump" the energy into the nitrogen molecules, so they will lase as they fall back down to ground state.  My beam was sloppy and intermittent, but on a piece of 8'' x 11'' paper covered with highlighter ink, quite bright.  (The UV beam is invisible, at 337.1 nanometers wavelength.)  The beam shape was about 1 inch wide, but only 1/4 inch high, and very bright on the highlighter-covered white paper.

The electrode rails were an inverted aluminum angle-V, and a double set of smaller aluminum rails set in an L-shape.  The inter-electrode rail gap was set using a paper-clip that I had micrometer-measured to 1.5 mm.  I used a hand-wound copper induction coil to connect the plates of the capacitor, and had put the spark-gap right at the edge of the capacitor plate (made using a bolt and a twisted, bent piece of aluminum foil).  The electrode rails are roughly 46 cm in length.  Curiously, the voltage wanted to be in the 7 to 8 kV DC range - higher voltage levels did not work.  And a resistor placed across the rails did not work.  In my larger setup (the dielectric is big - maybe 3 feet long by 2 feet wide), the capacitor plates seem to need to be coil-connected, to create a resonant circuit?  I need to learn more about "transmission lines", and the details of the capacitor operation at discharge. 

The tolerances for an N2 TEA laser are extremely tight, and any variation results in a circuit that creates an impressive amount of noise and sparking, but no laser action at all.  (You get a lot of reflected UV light, which lights up the whole room, but no beam.)  And the beam action is *independent* of the arcs between the rails.  It will occur *only* when all parameters are correct.  Any design variation, and the lasing action will not be evident.  In particular, the spark gap has to be tuned to the capacitor & electrode-rail configuration. This appeared to have been my problem, as I had tried several gap types and locations. 

Also, you want the spark-gap electrodes to have a round-ball shape, not a little pointy shape, so as to get consistent switch action.  And further, the electrode shape is curiously critical - the physical geometry of the electrode edge is a critical pre-condition that will determine whether lasing occurs. 

Electrons scatter to populate the capacitor plates at roughly light-speed, and how the discharge occurs across the electrodes when the spark-gap fires determines whether lasing will take place. Typically, each end of the rails is at a slightly different distance - 1.4 versus 1.5 mm.  I ended up adjusting the rails while the device was operating, using an insulated-handle screwdriver.  (Do not do this.  Really bad idea.  I am stubborn, and stupid, OK?  Do not do this.  Use a large piece of insulated plastic, or adjust with the voltage off, using set-screws or something.)  :)

Interesting thing about lasers... the math that describes the "population inversion" action can be expressed using probability-driven tensors, and is very similar to what one has to work with to understand neural-network AI technology. 
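The full tensor treatment is beyond a blog note, but the basic idea shows up even in a toy two-level rate-equation model: pump molecules up faster than they decay, and the upper-state population overtakes the ground state. The pump rate and step size below are made-up illustrative numbers; only the decay rate is tied to the log, as roughly 1/2.5 ns from the N2 lifetime:

```python
# Toy two-level rate-equation sketch of population inversion (nothing
# like the full tensor treatment; pump rate and step size are assumed).
dt = 1e-11                  # 10 ps time step
pump_rate = 5e9             # 1/s, ground -> upper pumping (assumed)
decay_rate = 4e8            # 1/s, upper-state decay (~1 / 2.5 ns)
n_ground, n_upper = 1.0, 0.0

for _ in range(500):        # simulate 5 ns of hard pumping
    moved = (pump_rate * n_ground - decay_rate * n_upper) * dt
    n_ground -= moved       # simple Euler step, total population conserved
    n_upper += moved

print(f"upper = {n_upper:.3f}, ground = {n_ground:.3f}, "
      f"inverted = {n_upper > n_ground}")
```

The same model also shows why timing is everything: cut the pump rate below the decay rate, and the upper state can never overtake the ground state, no matter how long you run it.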

[Jan. 18, 2019] - Ok, query was, how did you know?  Why did you hold and stay long thru December meltdown?  (Edit: Of course I didn't know.  No-one can forecast future events.)

It came down to what they call a "Knight's Fork" (I really hate chess, but I recall the term) - basically, a situation is crafted, or arises, where either option from a binary alternative set results in a win for the agent driving the action.  What we had was a situation where either rates would rise, or they would not.  I am naive, and have less information and fewer resources, but I need to invest.  The economic imperative remains.  So the reasoning was: if rates rise, then it is due to a rising economy and rising economic growth, and history, economics and my own research all suggest a rising rate of return on capital will drive rising business profits and hence rising share values & prices.  But if rates are *not* raised (or are even lowered), then this has to give a boost to share prices, as it will drive us back towards the zero-rates regime, where bonds have ceased to offer economic returns for average, small-scale rational investors.  Either outcome motivates rising share prices.  So the wild downswing of December was an artifact of the algo-driven trading environment and end-of-year distortions, combined with the "Dr. Strangelove"-style politics wafting out of Washington (Weirdington?).  What is it they call Washington?  "Hollywood for Ugly People"?   But America was well-designed.  The Founders created a rational, self-organizing political process-model, which has stood the test of time.   And New York, for all its amazing awfulness, still runs one of the better, more honest (most honest?) markets in the world.  

It's like Obama bowing to the Japanese Emperor.  It was a generous and honest action that was impressive.  (The USA nuked two of their cities, right?  I don't think a little bow to their old boss's son was out of order.  If you are an honest person - inside yourself as well as in the external world - you have to sometimes just bow to the wisdom and the quiet, gentle power of historical reality and basic truth.)   America, for all its crazy faults, has this awesome ability to self-correct, without having riots and tanks and military boys in the streets killing their own people (like almost every other nation has done).  One simply must give honest credit to the genius of the USA Founders, even if the curious wild doings of their political machinery make a sane man cringe with surprise, shock and disbelief.

I bought a bunch of new parts for the TEA laser prototype, including 25 1-watt, 1-megohm resistors, to make a current-limiting resistor array (a 5 x 5 matrix => a single 1-megohm, 25-watt resistor).  So, I=E/R => 20,000 VDC / 1 megohm = 0.02 A => 20 milliamps.  My fridge-sized DC high-voltage power supply can supply 50 mA, so I should be able to run this circuit with a spark gap (a short circuit) and not pin the current meter (and destroy both the power supply and the capacitor dielectric).  [We are up 357.59 on the DJIA, as I key this.  What a curious world.]
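Here is that resistor-array arithmetic as a quick sanity check, in Python for readability.  One wrinkle worth noting: at a *continuous* 20 kV the array would dissipate 400 W, far beyond its 25 W rating - it presumably survives only because the discharge duty cycle is tiny:

```python
# 25 one-watt, 1-megohm resistors in a 5 x 5 series/parallel matrix.
n = 5
R_each = 1e6    # ohms
P_each = 1.0    # watts per resistor

R_total = (n * R_each) / n   # 5 Mohm per string, 5 strings in parallel => 1 Mohm
P_rating = n * n * P_each    # dissipation shared across all 25 => 25 W rating

V = 20000.0                  # volts from the HV supply
I = V / R_total              # Ohm's law: 0.02 A = 20 mA
P_cont = V * I               # continuous dissipation at full voltage

print(R_total, "ohms,", P_rating, "W rating")
print(I * 1000, "mA")        # 20 mA - comfortably under the supply's 50 mA
print(P_cont, "W if run continuously - ok only because firing is pulsed")
```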

[Jan. 17, 2019] - I've tried several variants of the TEA laser design, but still no laser action.  I can create an absolutely blistering array of sparks on various electrode rail designs, but no beam-spots on the paper yet.  The tolerances are *much* tighter than I realized.  The capacitor needs to discharge in a few nanoseconds.  The inter-electrode gap needs to be roughly (ha ha) 1.4 to 1.5 mm (I need a micrometer...), and the current needs to be controlled better.  (I am blasting holes thru the capacitor dielectric, having now gone to a thin, 4-mil acetate sheet - a large piece of desktop plastic, sourced from Staples Office Supplies - to address the capacitor self-inductance problem.)  Steiner's trick is to use an array of 1/2-watt resistors in a series/parallel config to get a 12.5-watt, 1-megohm resistor, to limit current.  I have a Molectron DC power supply that can supply a truly frightening amount of voltage and current.  It is seriously dangerous to work with, and has these interlocks, so both hands have to be touching the control panel for the HV to be engaged.  Plus, I rewired it to reverse the polarity, so the hot lead is negative, not positive HV.  This makes it a truly lethal device - the hot side is *negative*, which means it flings out a surfeit of electrons, looking to find a ground.  If you just lay the hot wire on a linoleum floor, you get this interesting phenomenon where a big fat sloppy array of sparks spreads out across the floor, almost as if you spilled a pail of water.  Even after the supply is shut down, the room is full of static electricity, and you can walk around touching things and get little static-electric sparks.

I am working on this laser project because it is critical that I do not trade any market action until a certain future date.  I can watch - but not touch.  ("A man has to know his limitations...")   This laser project is non-trivial, it turns out.  Getting the beam to generate is much more difficult than I expected.  So far, I have just been creating a great light-show, making a lot of ozone (O3), and burning holes in plastic sheets.   The main problem seems to be the current.  I needed the high current for the fusion reactor (hence the refrigerator-sized HV DC power supply), but the laser application seems to want a much lower-current configuration, which operates in glow-discharge mode (the spark-gap fires before there is arcing on the electrode rails).  The state pumping of the N2 molecule happens *before* the arcing occurs.  One approach is to put the whole thing inside a tube filled with only nitrogen (arcing is reduced), and one fellow has documented how he did this.  (The tube needs to be semi-sealed, N2 gas supplied (welder's gas will do), with very thin glass to allow the beam to exit at the end.)   But that is in the future.   I will source & build the 1-megohm, 12.5-watt resistor today, and see if I can at least replicate published results.  (Understand this: science *requires* replication of published results.  This isn't just hacking.  Replication of results is one of the key things that makes science real, in a world where most information is typically lies, opinion, and supporting narratives for fraud.)

I notice that Royal Bank in Canada has *lowered* their 5-year fixed mortgage rates.  This is what my analysis suggested.  Japan still has negative rates, and Germany and Japan both showed negative 2018 Q3 GDP growth.  We are in for a long period of very low rates, and not much inflation.  What we will get is scarcity-push price deltas, ie. old-fashioned shortages causing price increases.  China is being economically attacked by a dysfunctional US political administration, but that administration is being run by Israel, not Russia.  It is truly amazing how broken and twisted US politics has become.  Israel literally has America by the short-hairs, an astounding and bizarre situation.  Rule of law is being replaced by arbitrary arrest of foreign nationals, and the silly Yankees have dragged us into the same foolish game, and induced our unwise Gov't people to seriously damage our relationship with China, and endanger Canadians living and working there.

It would be best now for America, if its Gov't just got out of the way.  Canada is facing the same problems - our gov't is a problem now, not a solution.  It would be best for everyone, if we could just shut down our government for a while, also.   Government is mostly just a giant machine for harvesting taxes, limiting wealth creation, and doing economic and social damage.  It does the best it can, when it just goes away.  So the US political model is again showing its surprising strength and resilience - it manages to find the right path, like a heuristic algorithm, or perhaps like Adam Smith's "invisible hand".   And the markets are understanding this simple truth, and showing strength and resilience also.

[Jan. 16, 2019] - The first article on a DIY TEA laser appeared in Scientific American, back in 1974.  I was very young, but I recall reading it.  (Always read anything I found interesting...)  The modern canonical article for DIY TEA laser building is the 2007 webpage from Nyle Steiner.  He provides detailed plans and images, and shows the quality of the beam that can be expected.  There is a good thesis from 1971, by V. E. Merchant, from Simon Fraser University (an MA thesis in Physics, which has some helpful math and diagrams).   The TEA laser is an astonishing thing, as it uses just the nitrogen and carbon-dioxide in the atmosphere as the lasing medium. 

Background on Laser "Magic"

With a TEA laser, there is no need for a cylindrical crystal of "ruby", or a vacuum pump, or a silicon fab (for diode lasers).   I toured the University of Waterloo's Laser Lab when I was in grade school, and the professor showed me a synthetic ruby laser rod about the size of a ball-point pen.  It was expensive.  I also saw, with my own eyes, a 3-d hologram of a toy Mercer race car.  It was truly astonishing - you could see the underside, or stand up and look down on the top.  The true 3-d image (lit by a blue-green argon laser) appeared to float in space three inches behind the illuminated photographic plate.  It showed me that science could create actual magic.

The image on a hologram plate is created by encoding variations on a photo-plate using light interference patterns, created by splitting a laser beam and illuminating the image from different angles.  When you first looked at the laser-illuminated photo-plate, you did not see anything, as your eyes would focus on the surface of the plate.  You had to focus your eyes several inches behind the plate, and then this ghostly image of the little car would pop into view, as your own brain's neural network re-processed the encoding that your own optical equipment was sending. 

For a young child, this was a powerful and dramatic learning experience.  At first, I saw nothing on the plate in the darkened room - but as I relaxed and extended my visual focus, the image materialized immediately, 3 or 4 inches *behind* the plate, and I saw it with amazing detail - and in true 3-d, not fake 3-d.  (You could crouch down, and look at the details of the *underside* of the little car.)  I realized lasers could create magic.  (And this was before CD's, DVD's or SDI beam-weapons were invented!)

The TEA laser uses a single rapid high-current discharge along the beam-line electrodes to pump the N2 and CO2 molecules up into high-energy states, and as they collapse to ground state, they emit photons, which are coherent in a narrow spectrum band centred at 337 nanometers wavelength (almost all the laser action seems to come from the N2 molecules).  The beam is UV (ultra-violet), so you cannot see it, but it will fluoresce blue on a piece of white paper, and be even brighter if you put highlighter ink on the paper.  The fact that you can just "lase air" has always surprised me.  If you use CO2 gas (which is easily available), you can put the whole assembly inside a glass tube, and get even more power - up into the megawatt range, apparently, if you pulse the beamline electrodes correctly (and rapidly).  Who doesn't want a megawatt laser to play with? 
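To put a number on that 337 nm line, here is the photon-energy arithmetic (E = hc/lambda) in Python; the constants are the standard SI defined values:

```python
# Photon energy at the N2 laser line (337.1 nm).
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electron-volt

wavelength = 337.1e-9         # metres
E = h * c / wavelength        # energy per photon, joules
print(f"{E:.3e} J = {E/eV:.2f} eV per photon")   # ~3.68 eV, well into the UV
```

At roughly 3.7 eV per photon, it is no surprise the beam makes highlighter ink fluoresce - that is comfortably above the energy of visible blue light.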

Note: Make sure to take basic precautions when working with this stuff, if you experiment.  Wear glasses to block the UV radiation (it will burn your skin, like bright sunlight does), and if using a spark-gap to trigger the discharge, wear hearing protection, as it is awfully loud - and the rise-time is *really* short.  Human ears do not accommodate rapid-rise-time noise.  They just get damaged, and badly.  So wear the same David Clark-style "headphones" you wear at the shooting range, if running a spark-gap-triggered device (ie. a tesla coil, or this capacitor-discharge-triggered device).  If using high voltage, remember it can kill easily, with just a few mA of current.  Use all the lab safety protocols every single time you engage the power.  And never reach with two hands to adjust anything on a live circuit.  Even a small discharge up one arm and down the other can stop your heart like a cheap watch is stopped by a hammer!

My initial attempt does not show significant laser action, as the dielectric of the capacitor is too thick (I used a big piece of opaque plastic for a window, from Home Depot).  I need a thin mylar sheet, to reduce the inductance of the capacitor, and the inter-electrode gap needs to be small - typically 1.4 to 1.5 mm.  The trick seems to be to get the spark-gap to fire before you get the blistering array of sparks between the beamline electrodes.  It seems to be the corona glow-discharge that triggers the laser action - you want to vibrate the N2 molecules up into a higher energy state, and then let them collapse back down to ground-state and release photons of the same wavelength - ie. 337.1 nanometers.  That then gives you the coherent beamline of light - which you do *not* get from the UV splatter that occurs when a plasma-discharge spark occurs between the electrodes.

Nyle Steiner's Oct. 2007 article on how he built a desktop TEA laser:

V. E. Merchant's 1971 thesis, on early N2 and CO2 TEA laser design, operation, & measurement:


[Jan. 15, 2019] - Picture at the right is version 0.1 of my TEA Laser, which is running on high-voltage A/C (which, according to some researchers, should work => 120 pulses/sec).  I am getting a sloppy, poorly focused beam, but there is evidence of lasing action taking place.  The stainless plates I am using are not perfectly straight, and this is a problem.  Also, the capacitor dielectric is out-of-spec thick, and I have to modify the inductance on the plate-connecting coils.  A second spark gap between the two coils seems to improve things.  A piece of paper with UV-sensitive ink (from a highlighter) illuminates in the centre, but the beam is just a smudge.  New metal today.  Also, I have a high-voltage D/C supply, which I want to try.  For laser-makers who want more detail, here is the July 2015 Hussain & Imran article providing information and specifications:  (Note: You can download the .pdf, as they made it accessible.  Nice work guys.  Thanx.)

[Jan. 10, 2019] - Jeffrey Vinik, a hedge fund manager who has been in the business for a good long time, and knows a thing or two (and also averaged a 17% rate of return on his fund, from 1996 to 2013, when he closed it) has indicated he is positive on stocks, and suggests we are in the middle of a bull market, not at the end of one.  He is also advocating a return to an old-fashioned stock-picking approach, based on fundamental analysis combined with standard hedge fund long/short positioning.  Curious, as my work with AI stuff has led me to pretty much the same conclusion. 

The AI stuff works ok, until it does not work.  Very little works over the long haul.  Strategies come in and out of fashion, and the current AI-driven algorithmic auto-trading stuff that has worked well for a while may get gamed (and in fact, it looks like it has been already, given the way a lot of pros got whacked in 2018).  My research (and others' as well) suggests that most of the stuff that works now for the algo traders is really just front-running - ie. trading in front of client positions.  That is illegal, but difficult to prove, and it can always make money.   It is obvious that folks far from the centre of the action, and without a Bloomberg or Reuters terminal for accurate real-time quotes, are going to just lose in any intra-day trading game.   The platform I use is awful now, and has several seconds of latency, which means I may as well be on the dark side of the moon.  I am not complaining, this is just a simple fact.   It is impossible to get price improvement or even at-the-market prices anymore, since I am routed thru a "cloud" product, run by what is basically a major computer service-bureau (its initials are: I. B. M.  Ever heard of this outfit?).   The more it changes - the more it stays the same, I have learned.  (I'm sorry, but I just laugh out loud now, at the stuff being sold to the kids, or written up in Wired.  Should we get a Mercer?  Or a Stutz?  Hmmm...  the colour and shape of our phones and TV's may change, but the humans using them have not changed at all since ancient times.)

So I think Vinik is right.  The only thing that has worked over a time-frame of decades is the approach Mr. Vinik suggests.  Most of the academic studies on the markets are worthless, because of the way the datasets are selected.  Survivorship bias explains virtually all they find.  Most investments tank, most stocks go to zero, and most people lie - and all political types lie with great style and profound skill - the Leftists even more than the Rightists.  Truth is a diamond, buried in the City of Dung, where most folks find they have to live.  I love Vinik's comments on weed stocks.   He is probably right on those too.  All it will take is a change in law, and they will all go up in smoke, right?

[Jan. 9, 2019] - The WSJ carries a good summary called "Ghosn in Wonderland", which details the disgusting political assault against the former Chairman of Nissan, Carlos Ghosn.  The Japanese Government, and a group of Nissan insiders, have decided to use what appear to be abusive, illegal methods to try to force Ghosn into confessing that he committed some sort of criminal act, though there is no evidence that he did.  His pay was pretty skinny by CEO standards, and there is evidence that the Japanese Government is behind this astonishing action, which has seen Ghosn arrested and held in jail, without access to legal process.  The whole thing looks like a yakuza exercise (the yakuza are Japanese gangsters and political operatives).  The Nissan "CEO", Hiroto Saikawa, refuses to speak about what is happening, and looks to be a nasty little piece of work - an active member of the inside group that has organized this corporate 'hit'.  There is evidence that Ghosn was planning to replace Saikawa, and that this attack on Ghosn was co-ordinated by a small group of insiders who deeply resented Ghosn, and the success he had achieved.

If you find this bogus process disgusting and unacceptable, then vote with your actions, and avoid any Nissan product.  Don't buy their cars, and urge others to walk away from this company and its products.  Honda and Toyota make similar vehicles, if you must buy Japanese.  But look at Ford and VW products also.  Don't reward gangsters and gangster actions.  The Japanese "prosecutors" are attempting to extract a "confession" using methods that would be illegal in the UK, USA or Canada.  Apparently, this is an Asian thing, with both China and Japan now using this abusive strategy to drive political process.  

Check out Ford vehicles.  We run a Ford F-150, and it is just crazy well-made, and is more reliable than the Murano.  The world can live without Nissan.

There is evidence that the Japanese Government wants to break up the Renault-Nissan alliance, and it looks like Ghosn would have made this difficult.  By removing him, a political objective can be achieved.   But by using arrest and interrogation in what should have been a boardroom decision, Japan looks worse than China. 

This can only benefit American and European auto makers.  We own a Murano, and it is a good vehicle - and I was attracted to Nissan after watching interviews with Ghosn.  But the Japanese Government has a long tradition of "force managing" Japanese auto companies - often badly.  The modern Nissan was formed as the result of a merger with Prince, back in the 1960's - a "merger" mandated by the Japanese Government at the time.  Without Ghosn, Nissan is basically being run by a committee of gangsters.  Why get involved with that?   If you want a Murano-style vehicle, take a look at the Volkswagen product.  The only bad thing Volkswagen ever did, was up-tune their cars, so the engines ran better.

One might also want to avoid travel to Japan - at least until they have a change of Government.  And given the *silence* from Macron, Renault and the French Government on the illegal, abusive treatment of their Chairman by the Japanese LDP Government, one might also want to avoid any French automotive product (but hey, that is already the case, yes?  Who outside of France, would buy a French-made car?  If you want European, you buy Italian or German, no?). 

One fact is made clear here: our legal procedures - though sometimes seriously flawed as well - are better than "legal process" in Japan or China.  And that simple fact makes our markets and our investment opportunities better, also.   The Asians just do not quite understand English prepositions, it seems.  They need to learn the difference between "rule by law" and "rule *of* law".   Because we insist on "rule of law", we live better, can make more money, and enjoy a higher standard of safety & security in our lives and our workplaces.    Our expectation now is that Nissan is probably finished as an independent company, if Saikawa remains in charge.   And Japan just looks really bad for allowing this.  Almost as bad as Canada looks, for arresting Meng Wanzhou, CFO of Huawei.  Canada certainly needs a change of Government.  Perhaps Japan does also.

But the TSX was up 199.58 points today (1.37%) versus a 0.39% gain for the DJIA.  Oil is trading at 52 and change USD/bbl for WTI, so some sanity is returning to the markets.  Oil is almost certain to move back into the $70/bbl range over the next year or two.  But it may swing back thru 40, just to take out all the longs first.  I am seeing this pattern a lot, now.  Danger, Will Robinson!  But if you can run lean during the plasma-storms, and traffic in quality only, you can not only survive, you can even live long and prosper!

Oh, for the rad-hackers who might be following my blatherspew, check out the recent "Nature" article by Ms. Jennifer Shusterman, Hunter College, City University of New York, on Zirconium-88, a kinky isotope of element "Zr".  Zr-88 decays by electron-capture, which is kind of curious already - but its apparent affinity for neutrons is about 85,000 times what was predicted (presumably by the standard model and traditional cross-sections) in Ms. Shusterman's experiments.  ("Wow!  That cross-section is as big as a barn!")  Physics is serious phun, no question.  We might be able to use this Zr-88 stuff to address the neutron ablation problem in higher-output reactors.  I have this silly hack-project which involves building a nitrogen TEA laser, as I just can't leave the physics-phiddling alone.  (I mean, who doesn't want a cheap laser that operates in the megawatt range, eh?  In your dreams?  Maybe...)  Problem with Zr-88 is that it is a radioactive isotope, which means rad-protocols and other gov-shyte, which I am not prepared to deal with.  But good luck to Ms. Shusterman.  She found something interesting.  (As an outsider, I can't even *view* Nature Online, so all I have read is the abstract.  But there might be something there, maybe.)

[Jan. 7, 2019] - It was a dark and stormy night.  (I am quoting Snoopy, who himself was not original.)  Just awful weather, freezing rain, heavy overcast, pressure dropping like a stone in water.  But hey, could be worse.  We have air, water, and a direct connection to a huge fission reactor over on Lake Huron, which gives us all the watts we can use.  GE has traded up to 8.75 or thereabouts.  Awesome.  I was supposed to take a position once it fell below 7, but I did not do it.  (My software is better than my wetware, sadly.)  "But we will get by, we will survive..."  (Grateful Dead.)   Funny, technology is magical now, but human conflict is certain to ramp up, I worry.   "I see a world that's tired and scared, from living on the edge too long..." (Blue Rodeo). 

I've been trying to determine the source of the profound unease I see in the world, and I think it relates to having almost perfect knowledge (or at least access to it, via the net), combined with a profound sense of complete powerlessness.   You can only watch the cars crash so many times, and the children get killed, and be completely unable to do anything about it, before you have to either *fully* switch off, or just start screaming in frustration.  

It is a psychological limit-case scenario, like in physics or math, where you encounter the edge-condition or limit.  For folks not familiar with limits: it's like the way Jacob Bernoulli discovered "e" (the base of the natural logarithms), by taking "n" to the limit in continuous compounding for determining compound interest (the formula is y = (1 + 1/n)**n).   If you assume 100% interest, and dial "n" up from 1 (annual) to 2 (semi-annual) to 4 (quarterly) to 12 (monthly) to 365 (daily interest) to 8760 (hourly compounding) to 525600 (compounding each minute), etc., the resulting value of y converges on "e". 

Play with it, using APL or Basic or something simple.  You start with $1 at 100%, which is y=2.  If you compound after 6 months (twice), then (1+1/2)**2 = 1.5x1.5 = 2.25.  (Compounding more often gets you more money.  That's why the math of "interest" was interesting.)   If you compound each quarter (4 times a year), with 100% interest, then 1.25x1.25x1.25x1.25 = 2.44140625.  Monthly compounding: (1 + 1/12)**12 = 2.61303529.  Dial "n" up to 365 (=> daily interest), and you start getting close to continuous compounding: (1 + 1/365)**365 = 2.714567482.  The "e" value is the limit as "n" goes to infinity.  It is roughly 2.7182818, and is sometimes called Euler's number, after that famous mathematician.  Turns out to be pretty useful.
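The same table, sketched in Python instead of APL or Basic, for anyone who wants to run it:

```python
import math

# y = (1 + 1/n)**n for the compounding frequencies discussed above:
for n in (1, 2, 4, 12, 365, 8760, 525600):
    print(f"n = {n:>6}: y = {(1 + 1/n) ** n:.9f}")

# By minute-by-minute compounding (n = 525600), y is within a few
# millionths of the limiting value:
print(f"e = {math.e:.9f}")
```

You can watch the convergence slow down: each row gains you less than the last, which is exactly the behaviour of a sequence approaching a limit.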

Our brains have limits.  We "hit the limit" once psychological stimulation reaches a certain level.  What happens with living systems is that the behaviour changes.  This trivial analysis suggests that perhaps we may have exhausted the benefits of mass internet access, and that "switching off" or "decoupling" might confer benefits to "users".  (Note how we use the terminology of drug-addiction to characterize customers of modern comm-systems.)  

If there is benefit to be gained by disconnecting from internet access (social media, broadcast media, youtube with its intrusive adverts, news shows, talk shows, etc.), then we may be approaching a Gladwellian "tipping point" where all-the-time online access may be seen as unhealthy, and people may pay for being able to be *not* online.  We might actually want to create "avatars" that can interact with the world *without* us having to be present.  The AI's become active servants that are able to act as our "social interface agents", so that we do not have to deal with annoying, stress-creating human agents.

And we specifically may not want much of the IoT technology at all.  Looking at some of the CES offerings, one gets the sense that we might even want to pay to have our houses "swept" for IoT technology that is monitoring our actions, much like foreign diplomatic compounds are routinely swept for audio bugs and cameras.  We don't want to American-bag and Khashoggi-kill the salesmen that come to visit, but we might pay to have an active intrusion-detection AI-based gate-keeper service that keeps them from even putting a foot on the property.  (Like those signs in the USA: "Security by Smith & Wesson".)  I envision a series of active AI's that could provide security services and offer active protection against the hostile, toxic intrusions and APT (advanced persistent threat) technology that much of the internet seems to enable now.

The world has never been a very nice place, and it now appears the beneficial, altruistic age of the internet is over.  The bad people who are genius/experts in the effective exploitation of human weakness and related psychological flaws evident in living systems, seem to now have the upper hand.  Every website wants to drop payloads.

The whole open-source approach seems to have been degraded now.  It's like rotting meat - you keep finding worms.  There seems to be a worm in everything now.  Almost nothing is pure, or honest about what it is actually providing.  Huawei is criticized for its router beaconing, but *everything* seems to be "calling home".  Every piece of software seems to be full of little white wiggling processes that telegraph back to head office what the user is doing.  This is not what folks want, and it is why I am skeptical of new code and new devices.

We may soon hit a "limit", I suspect, where folks recognize that they are being "managed", and that most of the "benefit vectors" flow only one way now.  If there is greater value to be gained by "decoupling" from modern communications technology than by being an active participant, this will change things, as economics remains real.  Technology that assists and enhances true privacy, may become more valuable than the Apple iPhone approach that so delights the children now.   I seriously suspect that business and political people will soon be paying *premium* prices for cell-phone technology that has *reduced* functionality, and enhanced privacy, for example.  Didn't we almost have this?   What you may want is a cellphone that has just-enough technology to make and receive calls - but no other features that can be hacked, or used to track, exploit, monitor or entrap you.   It also means that "IoT" technology may be an unattractive investment, as it may be seen as dangerously intrusive.

If the internet is eventually viewed as a nasty, intrusive, dishonest and dangerous place, then you may want to reduce its access, limit its reach, and pay money to prevent it being used to monitor your actions, degrade your security, and threaten your safety.   The future of technology is maybe not so bright anymore, since it seems to carry more risk than benefit.  The bad guys might be winning this one.  But perhaps there are some actions "Little Brother" can take.  And there looks to be an opening for a new class of products with a different focus, which offer real, tangible benefit to users. 

[Jan. 4, 2019] - Must refactor this blog, it grows too long, yes?  Manic market action, DJIA up 607 points as I sip the morning coffee.   Oh my, what a world, where the weather mirrors the markets, or is it the other way around?  Money is being made today, and as the Festival of Saturn is now past, the sun is returning to the northern lands.  Makes an old Roman happy to see it and feel its warmth.   Each day grows longer now, the darkness and cold retreating ever so slowly.  We remain all-in and long, as our theory and our observations both suggest rising real rates plus a rising rate of return on capital should occur together.  It has been a rough ride with old Santa Claus (the Satan clause?), and we are up another 50 points on the DJIA in the time it has taken me to key this.  Perhaps this is the turn?

A Little Note on Growth

The December jobs number in the US just blew the doors off.  Expectation was for 184,000 jobs added, but the actual came in at 312,000.  That is a good number.  The unemployment rate ticks up to 3.9%, but in the old days, that was considered the "full-employment" number.  Combine this with Powell's comments in Atlanta about the Fed being sensitive to financial market conditions (keyword is: liquidity), and we have a requirement for a recovery in discounted prices.  As long as the Fed does not "unwind their balance sheet" (ie. dump financial assets and drain liquidity from the monetary system), then worried folks can breathe more easily.  If you look at a chart of daily changes in the SP500 index, from Oct. 2018 to now, these daily 2% swings make the chart look like a heart patient going into fibrillation, compared to the Apr. to Sept. period of calm.  Think: an old stationwagon, at high speed, fishtailing out of control on the interstate.  A feedback-controlled process that is under-damped, rather than critically damped, will oscillate out of control, and break the machine.  And large in-period deltas (> 30%) will take any feed-forward process into the zone of chaos, where in-period changes can become very, very large.  It's baked right into the math; it is not necessary to have bad guys or terrorists do anything evil.

In the "Time of the Engineers" (the 1960's), most folks understood this, and economists focused on "stabilization" as the primary role of government economic strategy.  But as the world has become overcrowded, as we became hooked on economic growth, and as economics became concerned with numerous social-welfare issues, the benefits of stability have been forgotten.  The world *requires* a high level of economic growth, to prevent mass starvation and conflict.  Technical change has allowed us to reach these growth goals, and this is good.  But we are dancing with demons now.  High growth in a limited, confined space will end badly.

Elon Musk is right.  If we are going to have high levels of growth, then we had better move our efforts beyond this model where everyone is living in a single mud-puddle.  Two immediate requirements present themselves:

1) We need to commit a percentage of global resources to Mars colonization, and recognize that it will be extremely dangerous, and many people may die.  It will be crazy-difficult, and costly.  But it must happen.  We *must* turn our focus outward.  Our world is becoming an unstable mono-culture, and our planet faces an economic and ecological "gambler's ruin" scenario if we try to drive high levels of growth without some sort of outward-looking, external focus.  Population will increase geometrically, and the economic growth which supports and sustains population will encounter limits.  If the growth curve of sustainable population drops below the biological-actual population curve, a big dieback must occur.  Malthus will not be cheated.  If the dieback provokes massive violent conflict, and all weapons are deployed, the results may threaten the operational integrity of the planetary ecosystem.  The world may be entirely poisoned.

The human species will benefit from trans-planetary diversity, as well as from the economic benefits of constructing an entire artificial ecosystem on another planet.  Interestingly, China has already demonstrated that it is possible to use a commercial economic model to transform a significant portion of the real-estate of this planet from a subsistence farming economy to a modern commercial economy in less than 40 years.  China has been building cities the size of Toronto, at the rate of 15 to 20, every year for the last 15 years.  This level of economic transformation shows what is possible, with focused effort, reasonable management, and rational action.  The profoundly difficult and complex technology required for Martian colonization will generate a high level of innovation and scientific development, which will be directly beneficial for everyone here, much as the US space program was helpful in jumpstarting the development of 4-bit and 8-bit microprocessor technology in California in the early 1970's.

2) There is no economic rationale for connecting high-speed trading robots to the market feeds.  We are already running into "Dr. Strangelove"-style control problems, where rapid, nano-second automatic trading (and often, also front-running) is acting to destabilize the markets.  Why is this allowed?  Will we really accept "pilotless" passenger aircraft?  Why should we accept "pilotless" trading & investment?  I like, accept, and use AI technology - but I do not completely bet my life on it, as it is sometimes *radically* wrong.  We should think about restricting the deployment of AI technology where it can damage or hurt a great many, in order to benefit only a very few.  We might start by preventing the connection of automatic trading robots to the electronic trade/settlement systems.  All we need to do is require a human "pilot" be in the trader's chair, and that the trades be keyed manually - using human "meat sticks" (fingers!), rather than algorithmic actions initiated purely by computer link.  This would put all market players on a level playing field, which is so very much not the case now.

We need to develop and use technology, but use it wisely and fairly, and not blindly.  And we must look outward, and not focus all our efforts on inward-directed transformations.  As a species, we risk a kind of "hikikomori", in which human bigotry and prejudgements overwhelm the dialog of public space, and violent conflict becomes the newest normal, if we refuse to leave the confines of our wet planet.  Already, superstition and its evil twin, religion & religious-cruelty, are infecting many people on this world.  We need to look outward, and challenge more aggressively the vacuum of space - rather than the flaws in each other.  And the motivation for this is not altruism, but basic species survival.

[Jan. 3, 2019] - Must get used to writing: 2019.  Ridiculous number, no?  Couple of months back, I met an old high-school friend who lost all his money, and is living in the basement of a relative's house.  Another is in an old-folks home, while yet another appears to have become a hikikomori (translated sometimes as a person choosing "acute social withdrawal"). 

We should not worry about trading results, if they do not kill us.  If you have a rational/sensible portfolio, and it is throwing off a dividend-stream that allows comfortable survival, then one can probably absorb a 20 to 25% drop from here, and still be ok.  I see another market sneezing fit in the DJIA that takes us down 435 points before I have even made my morning coffee.  Oh my.  When your portfolio can give up 2 or 3 percent (the typical annual rate on long government bonds) during the time you take to walk your dogs in the morning, then we are having a little volatility issue, aren't we?

Portfolio Insurance, anyone?  Where are Michael Milken & Ivan Boesky when we need them?  We need more humans in this market.  Why can't the US regulators see that the markets *need* speculators who can add their liquidity to *manage* prices?  This "robot" market is not good, and is starting to look like an under-damped feedback machine.  (Not a good thing.  Imagine an old station-wagon fish-tailing on a high-speed highway.)  And there is nothing more idiotic in the history of the world, than to suggest that there exists a crime of "market manipulation".  You can only buy and sell.  And you can only do illegal and immoral things by corrupting the information flow.

And what has the Federal Reserve been doing for 10 years, if not *aggressive* market manipulation?  So many (including *me too*) warned against QE the way it was being done - enriching a small few by the bond-buying that drove interest rates below zero.  The unwinding of the central bank balance sheets (esp. the Fed's) carries the risk of plunging us into a 1930's-style downturn, if most people have their wealth tied up in the stock-market, and the stock-market comes off 20 to 25%.  And if those stock investors are margined, then a *multiplier* comes into effect, which can make the wealth-evaporation rate much worse.  Factor in also an uptick in loan-costs for mortgages, student-loans, and consumer debt, and you have an ugly cocktail of costs that may car-crash economic growth.

We here at GEMESYS, deep in the hinterland, are structured to use almost no margin, so a 25% downturn will not kill us.  And I suspect most major investment structures in Canada (and the Canadian banks, with certainty) are set up to be able to absorb a 25% across-the-board negative phase-jump in equity prices.  (DJIA now down 573 as I write this.  So the price drop is continuing into 2019.)  The banks' risk management processes are designed (required by law, actually) to be able to absorb 20 to 25% losses, and not impair capital.  But that Yen/Aussie-dollar mini "flash-crash" last nite (which took the Yen to 78 yen to the Cdn dollar) was a little warning that a dangerous game is afoot, as Mr. Holmes would have said.

We are in for some interesting times, I fear.  DJIA now down 635.  This market reminds me of that old Bob Dylan song "... everybody must get stoned!", and I fear investors are going to get stoned by this market for a while.  We are now down almost 3% before 11:00 am, so yeah, like another old Dylan song:  The times, they are a changin'... from QE to QT, it would appear.  No wonder Mr. Trump is bad-tempered & seeing red, lately.  He is being set up to be Herbert Hoover'ed.  [Remember the old guru's secret to a long life: "Don't Die!"  In the world of investment, that turns out to be wise advice, that is not as glib as it first sounds!]  My wish for everyone for 2019:  Survive, eh?  Try to avoid ruin.

[Jan. 1, 2019] - Listening to Tchaikovsky's Nutcracker.  Really quite fun.  It's the 1989 Berlin Symphony excerpts, a Delta Music CD with Peter Wohlert conducting, a seasonal classic.  The 12 minute "Divertissement" is cute, with: Chocolate / Coffee / Tea / Trepak / the Dance of the Toy Flutes, and finally, the Clown.  A musical metaphor for our time, really.  Quite lovely, as it segues into the gorgeous "Waltz of the Flowers", and then the awesome "Pas de deux" - the "Dance for Two", possibly one of the most sadly beautiful pieces of music ever written.  What must it have been like, to be in St. Petersburg and see this ballet first performed?  Breathtaking, no?

Try to imagine the world of Europe in say, 1906?  There was this wonderful stability and security and peace.  A businessman or an artist could travel from Moscow to London by train, and pass comfortably and in safety thru all the nations of Europe.  The British pound was the global reserve currency, and one could transact securely on the bourses of the world, perhaps using Barings Bank or Lloyds or any of the big German or even Russian banks.  Barings was founded in 1762, and survived until 1995, when it collapsed as a result of Nick Leeson's rogue trading.  (I love that term "rogue trader".  As if one "lone gunman" can take down the world.  Complete nonsense, of course.  Barings failed because it became stupid, like every org that fails.  Stupidity kills, and it kills quickly, which is sad.)

Look at the failure of Europe - not once, but twice in the 20th century?  It would have seemed quite beyond belief, to a citizen of Europe of 1906.  And it was only the year before, in 1905, that Einstein wrote his paper (as a Swiss patent clerk!) on the Photo-Electric effect.  That paper was only published, because Max Planck said it needed to be.  The world was mostly safe, stable, happy and prosperous in 1906.  And look at that same world, just 10 years later?  A great victory for ignorance and stupidity, was it not?  We have to learn from this stuff, folks.

Of course, I had to play Cowboy Junkies' excellent 1996 disk "Lay it Down" (Geffen Records) as it has "Common Disaster" as track two... "Won't you share, our common disaster?...", followed by the disturbingly brilliant "Lay it Down", track 3, a most haunting song of transition and change.  I got to meet Margo Timmins a few months back, after the Cowboy Junkies did their show at the River Run Centre in Guelph.  She has white hair now, but her voice is still hauntingly beautiful, and the show was magically wonderful.  Michael Timmins writes the songs, and plays guitar, but he is a shy genius.  Their music is just about perfect, and it was the high point of my year to meet and chat with Margo.  Folks drove in from Michigan and Montreal, to see the almost sold-out show.  But there were so many folks with grey hair...

So, Happy New Year - which will be the Year of the Pig (well, "Boar" actually...).  I am reminded of "Niederhoffer's Assertion", supposedly said when he was running positions for George Soros: "It takes courage to be a pig."  And I believe it does.  But there are times in your life, where you just have to go "all in", aren't there?  Half-measures and caution must be abandoned, and vigorous, complete commitment is the only approach that will do the job.  But those kind of actions can get you terminated - American-bagged, and carried out in pieces in suitcases, like Jamal Khashoggi was.  I sense we will see some very big changes in 2019.  Stability really is pure illusion, like a magician's trick.

I wish everyone "bonne chance", as we vector into this brave (grave?) new world.  But it is like track 7, on "Lay it Down" - I keep getting that "lonely, sinking feeling", as I look to the future.   Perhaps I am wrong.  (It has happened a few times...)  :)

That disk, "Lay it Down", might be the best music to come out of Canada in the 20th century.  I *never* grow tired of listening to it, which is the true test of any art.

[Dec. 31, 2018] - This last year will be remembered in history, as the year all the *politicians* simply failed.  They all get an F.  Nothing of substance, significance or value was accomplished by any political entity anywhere, it seems.  The failure of the European Union to be anything more than another way for political opportunists to invent new taxes, is particularly tragic.  Poor Ms. May should just step back, and simply engineer the "hard Brexit", as that appears to be the right course of action.  Putin of course, looks again like a wise statesman (the man is a real genius - but also a captive of history, like every Russian leader), but he has played a weak hand very well.

The purchase of Redhat Inc. by IBM is also genius - but for the Redhat shareholders.  The code-base of the Redhat Linux O/S has become a bloated horror-show (or what one analyst/reviewer calls: "The Tar-pit of Redhat Complexity").  I have these 32-bit boxes that run flawlessly - for months on end - with older Linux versions, and even a CentOS 6.6 box with Firefox 60.2 on it, but my CentOS 7.4 boxes, despite (or because of?) upgrades to the latest kernel versions, will hang regularly, like old Windows Vista machines.  When I try to report the bug via ABRT, Yum or DNF or whatever piece of code is running builds the report, and then hangs, trying to download something from a repository - but it will not tell me what it is looking for, so I can't fix it!

Redhat Linux has reached the "bloat/fail" singularity, and basically sailed over the event-horizon.  All this crapware (Dracut, Plymouth, etc., etc. - named after towns and cities in Massachusetts) just bloats out the whole thing further, so it becomes progressively more difficult to diagnose basic application and O/S utility failures.  They are trying to make it look like Android and iOS (telephone crapware), with installable "apps", which is just awful.  Could someone tell Linus before we all die, just how awful this bloat has become?  A basic Redhat "ps ax" goes for 3 or 4 screens, before I even start Firefox or a single application.  It's a barf-pile of bloatware, and basic, critical, useful stuff (like the SELinux graphical app to set/unset booleans to address SELinux problems) doesn't even exist.  All the stuff one needs has to be found and installed.  The basic GNOME desktop is being made to be like an Android cellphone - dumbed down to the point of being useless, children's crapware.  So, for Redhat, this was a very good time to sell the whole mudbag to "I've Been Managed".  Brilliant bloody decision, and the current $175/shr USD valuation is damned impressive, considering the current state of the product.

Everyone should learn this please, for 2019:  Reliability of operation is everything, and is a billion times more important than features or any form of fancy, clever design. This really has to be understood.  This year has seen a lot of really stupid, terrible/awful stuff - like the "push the nose forward automatically" feature on the Boeing 737-Max (aka "auto-death dive") that killed an aircraft full of people in Indonesia, and the "climate change tax" on gasoline that the French government introduced, that enraged the people of France enough for them to trash their own capital city. 

My message for everyone as 2018 ends:  Let's stop-the-stupid, ok?  Please?  Instead of high taxes, let's have low taxes.  Instead of "government", let's have "freedom".  Instead of nations-at-war, let's have companies-doing-business.  Instead of feature-laden, high-priced, junk products, let's have stuff that "just works".  Ok?

[Dec. 30, 2018] - Reading a bunch of financial reviews for the year, and market comments for 2019.  These guys give me their views on "sectors", and "recommended market weightings" - ie. "We are overweight financials and communications, and market weighted on..." pretty much everything else.  What annoying puffery!  Tell me what specific investments you recommend, and why you make that recommendation.  Or what you would short.  Most financial writing is gunk, and most professionals in finance make their money by either capturing the spread, raising money for people and taking a cut, or getting money from people, investing it, and taking a cut.  That is pretty much all there is.  Everything else is noise.  Damn, what a world of weasels.  I've decided to try to write truth here - even personal stuff - because that is, in the end, the only thing that really matters.  Everyone ends up in the graveyard - or worse, maybe scattered across the landscape or vapourized.  Don't worry about life after death (that is stupid), just try to live while you remain alive.  And try to live within the envelope of truth.  The nature of work is to determine truth, and then act on that knowledge.  Both parts are difficult, of course.

Taking stock:  The Xerion model is too short-term (3 days ahead, point prediction).  It works, but not enough to trust with big money.  The intermediate momentum models work better, but can be blindsided by events.  This has been proven in the recent price jumps. 
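For readers wondering what an "intermediate momentum model" even looks like in its simplest form, here is a minimal sketch - NOT the Xerion network or my actual system, just the generic shape of the idea, with a made-up lookback: go long when price sits above its trailing average, stand flat otherwise.

```python
# Hypothetical momentum-signal sketch (illustrative only).
# Signal rule: +1 (long) if today's price is above the trailing
# `lookback`-day average, else 0 (flat).
def momentum_signal(prices, lookback=20):
    """Return a +1/0 signal for each day after the lookback window."""
    signals = []
    for i in range(lookback, len(prices)):
        trailing_avg = sum(prices[i - lookback:i]) / lookback
        signals.append(1 if prices[i] > trailing_avg else 0)
    return signals

# Toy series: a steady uptrend keeps the signal long the whole way.
uptrend = [100 + i for i in range(40)]
print(momentum_signal(uptrend))
```

The weakness noted above is visible right in the rule: the signal only reacts *after* prices move, so a sudden event-driven jump (or crash) blindsides it by construction.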

The market meltdown was caused by a tsunami of awfulness:  Khashoggi's murder (proving the Middle-East is a continuing horror-show that has no chance of any non-military solution) / Macron's idiocy and his French climate tax (and the ugly Gilets Jaunes Paris riot response) / Canada's foolish arrest of Huawei CFO Meng Wanzhou (and China's idiotic response of arresting two Canadians - an ex-diplomat and a businessman - as spies) / Japan's arrest of Nissan CEO and Director (and company saviour) Carlos Ghosn, for a financial reporting issue, when everyone sees it as a yakuza takeover inside Nissan by some nasty little bastards / Trump's China trade-war plans, plus his twitter-storm of other ineffectual threats, proving he has little operational authority / the UK "Brexit", which is being badly screwed up by PM May / German and Japanese Q3 GDP growth numbers that went negative, in what is probably the most financially accommodative interest-rate environment in human history.  In Canada, we can't even build urban trolley lines and subways, much less big stuff like pipe-lines thru mountains.  China is building whole cities of millions of people every year, and we can't even build a simple coastal gas-station.  It's pretty sad.

World investors look at the sad-awful newsflow, and they see this gathering shit-storm of stupidity that is now assaulting business events.  Macron looks like a complete fool, our Trudeau is an ineffectual embarrassment, Trump appears to be skating close to the edge of sanity, and Angela Merkel, the only smart person left in the room, has said "Ok, I'm done."  Investors are worried about the future now, more than ever before.

I can only applaud China and Japan for not starting a war over those stupid Senkaku islands, and ask why Russia (and clever Mr. Putin) does not simply sell those worthless northern islands back to Japan for a monster bag of Yen?  Use the gabazillion Yen to develop the entire Russian far-east, and let your people get *rich*, like is happening in China.  Otherwise, you will wake up one morning, and discover that China has quietly annexed an entire slice of Asian real-estate, and Russia's eastern border now runs from Inner Mongolia to the Laptev Sea.  Folks living in New China North could then vote: Join China and become very rich, or stay in Russia, and remain poor?  If China goes whole-hog and copies the American political model as well as its economic model, China could easily get away with this sort of thing in the future.  The role of the State is to protect (not attack) private property.  People choose the State they want to belong to, based on the quality of the legal and protective services it offers.  Terror States just create dead people and poor people and people who leave.  Politicians should recognize this obvious truth.

[Dec. 29, 2018] - I can't believe 2018 is ending, with so little accomplished.   I feel like Kurt Vonnegut, coming unstuck in time.  My pretty partner asked me tonite if I remembered 40 years ago (she did), and I said yes, see that car in the photo - it was 40 years ago I had that pretty toy!  I would drive at nite at insane speeds - 100 mph was a nice cruising speed on dark, northern no-traffic roads. (Eventually, I had to get something with wings...)

What do kids do now?  Where I lived, we still had "wild lands", where there was no one.  I liked those places.  I was flying in a C-172 once, up north, and took some friends up for a flip - and there were no landmarks, no highways, and no GPS.  We went north & played tourist & messed around, and then I had to go back, and find the bloody airport.  Bugger, but there were no bloody landmarks... I had to dead-reckon until I saw the field, and of course, I never mentioned that I was bloody lost, up until I saw the airport and the military jets doing their circuits.

Like the time I flew to the capital, and had to fit into the pattern at Uplands, between commercial jets, my PA-140 basically throttle-firewalled, in the dark, and me without a night endorsement at that time.  The whole airframe was rattling like a pickup truck on a gravel road... and I was alone, determined to just go land at an airport that I owned like any other taxpayer... I was going to Ottawa to meet a girl I knew, and a weekend of fine times was planned, so the minor issue of "time to spare, go by air" was no problem.  This was before GPS, and you just calculated stuff with a circular slide rule, did your vectors, and flew on in the dark on VOR, cross-checked with the compass.  I felt like Saint-Exupéry.

What do people do now?  What do the kids do now?  They play synthetic video games, and they don't know the feeling of playing for real in the big world of dark, true stuff.  If they fuck up, they just re-start the game.  If we fucked up, we pasted ourselves into the frozen forest, and they found the wreckage in the spring.   Or maybe not.  I remember driving the Sea-to-Sky highway to Whistler, back in '79, in a 4-speed Camaro with a 350 V8 - you could let the clutch out in idle, and it would lurch along, (but not stall).  And training flights from Vancouver Intl, with rain so heavy and ceiling so low, that it was madness - but my instructor was a quite mad Australian, (seriously, this guy was crazy for flying) who would literally fly in any weather - as would I also.  He would say "Weather's rough, do you want to fly?", and I would say (since I had saved up $80 for another 1-hour lesson), "Well, yes, if we can!".  

Years later, I remember taking this girl I got keen on, up in my little PA-161 with its little tapered wings, and its crappy 2-axis wing-leveler auto-pilot, and we flew up to the bottom limit of block airspace (10,000 ft), and she was impressed.  Damn but she was lovely - I still remember the crazy feeling - I had built this cottage on an island in a northern lake, and I banked the a/c down thru a scattered layer, to the lake below, dropping a wing like a fighter pilot in a war film, and I realized, this was probably the high-point of my entire life.  I had it all, and I was flying like my childhood dreams, with a pretty girl who I would have that night, and that it would never, ever be better than it was now, at this very moment.  I hope everyone at least once gets to experience that feeling.  It was a long time ago, and of course, nothing ever lasts.  But for me, flying and taking my friends up to experience the amazing high, was pure dream/magic.  I felt like Roy, in Blade Runner... "I've seen things... things you can't imagine..."  But it is all just tears in the rain now - and when I am gone, it will be swept away, and no one will even know anything special happened, unless I write this blather, that only data-harvesting robots are reading...  Like Lucky's speech in Godot...  But sometime in the far future, long after I am not here, someone doing research on this "time of craziness" might read this, and learn from it.  We were here.  We lived...

We were young, crazy, and it was great.  Really.  It was bloody wonderful, and we never expected mean, crazy, selfish nutters to blow it all up and ruin things.  I have these films/videos of the early V2 launches, and remember the X-15 flights - the Dyna-Soar, the Gemini missions, Apollo, and the amazing "Valkyrie" research aircraft, the XB-70.  (I have all these Youtube videos of the various XB-70 films that were made.)  The golden age of aviation was in the early 1960's - the Valkyrie was *not* a bomber, it was a pure-research aircraft, that provided the flight data to build the Space Shuttle.

In one of the videos, they have this "family day" where all the families with kids are there at the roll-out, and all the 10 year old kids have these brush cuts, and look exactly like I did.  Hilarious.  By 2018, we thought we would have Moon-bases, and some operational settlements on Mars.   

Trump is right to withdraw all US troops.  The solution to that region is probably going to be based on the ability to generate deuterium-based fusion on a large scale, and deploy it directly from orbital platforms.  <more sighs...>

[Dec. 28, 2018] - Year winds down.  The picture above I took back in October, of the Farm.  Looks like paradise some times of the year - not now, however.  Freezing fog, with snow expected tonite.  (Scotland weather... This is why the Scottish invented raw-wool sweaters and Scotch, I suspect.)

Markets are dealing with the new-normal of QT, a fading-power USA, and the internet of butchered information.  Info on the net is wrecked now.  It is either:

1) outright misleading disinfo, or
2) obfuscated hype & mis-direction, or
3) cloaked in absolute secrecy and unavailable, or
4) subject to legal restrictions and copyrights, & behind costly paywalls, or
5) blocked by the local "filter bubble" of one's access methodology (you see only what can trigger behaviour your providers want to trigger, such as that reach for the credit card), or
6) degraded by the political orientation of the sender and the receiver (you get a different picture on Fox or CNN, and one's own biased prejudgement degrades the signal further), or
7) corrupted by internal technical/neural problems related to the inherent flaws in our own neural matrix - eg. anchoring to the last data point or most recent image, and other NLP trickery, which is used effectively by virtually every info-provider everywhere.

So pretty much all information sets harvested from the internet now are either wrong, bent, or dangerously useless.  And this is serious, since markets only work right when there is reasonably accurate dataflow.

I was supposed to take a big leveraged long position yesterday, but I have been blindsided so many times, that I elected not to pull the trigger.  I feel too much like Jamal Khashoggi must have, with that bag over his head, before MBS's agents killed him.  So I will say it here.  If there was ever a time to "buy the dip", this just might be it.  If my analysis is even half-way correct, then there are some crazy-attractive opportunities on offer.   But I am already fully long.  The risk is a pull-back to the 18,000 level on the DJIA.  That seems a real possibility, given the political situation in the USA.  The other risk is an earthquake scenario in California.  There have been many seismic events in the "Ring of Fire" this year, but none to speak of in North America. 

When volcanos explode in Indonesia, and Japan gets the big-shakes, then the tectonic plates are shifting.  And it can't just be one side that shifts, right?  We are about due for a large shake on the other side of the Pacific.  (As Warren Zevon said: "If California slides into the ocean, like the mystics and statistics say it will, I predict this motel will be standing, until I pay my bill..."  Also, the great "Panic of 1907" was triggered by the San Francisco earthquake and subsequent fire.)  But if we lost California, it would create some nice, new ocean-front real-estate.  Trump can go back to being a developer, and a new Silicon Valley could be built in Nevada, perhaps.  We would still be ok, and the USA might get its head out of its backside for a little while, which would benefit the rest of the planet.

Remember, stability is an illusion.  There is no such thing as "stability".  Violent, continuous change has always been the norm, in every sphere.  And we can't even know what the real distribution of possible change-triggers looks like.  We must accept "wild randomness", and leave the "tame" randomness for the casino players.  All the risk "models" are not just sure to be wrong - they are dangerously misleading.  Look at the market prices.  Now, run your risk model for the market prices all being "zero", which is where they are when the markets are not operational.  The generated number is your true "value at risk".  The trick is not to diversify *within* your portfolio; the trick is to diversify *between* portfolios of asset classes.  (One asset class is a big stash of weapons and ammo.  This is actually not a bad investment, really.  Cheap, useful, and it can be marketed, even if financial markets have gone away.  The "preppers" might actually be shrewd investors...)

I have all these friends & associates who can describe in detail how *everything* in their country broke, failed, went crazy, etc., and the only option was to leave (the latest group are the Syrians - talking to any of them is a real education).  I worked with and got the stories from folks from: Pakistan, Uganda, Ethiopia, Zimbabwe, China, Poland, Hungary, Czech Republic, East Germany, Cuba - and other war zones - and they all watched *everything* - including the local money itself - collapse into a state of war and chaos, where everyone lost everything they had.  Events such as the Cultural Revolution in China under Mao, the invasion of Poland by the Nazis, the Communist takeover in Ethiopia in 1974, and the expulsion of the Asians by Idi Amin in Uganda, are particularly awful events, matched closely by the idiocy of the George W. Bush Iraq war, and the rise of ISIS in Syria - recent events that have caused the deaths of hundreds of thousands.
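The "prices at zero" point can be made concrete with a toy sketch - illustrative numbers only, not a real risk model: compare the usual Gaussian-style one-day VaR number against the figure you get if every price is marked to zero because the markets themselves have stopped operating.

```python
# Toy comparison: "tame" parametric VaR vs. the prices-at-zero scenario.
# All figures are hypothetical/illustrative.
def parametric_var(portfolio_value, daily_vol, z=2.33):
    """One-day 99% VaR under the conventional Gaussian assumption."""
    return portfolio_value * daily_vol * z

def total_loss_var(portfolio_value):
    """'All prices at zero' scenario: the whole portfolio is at risk."""
    return portfolio_value

pv = 1_000_000.0
tame = parametric_var(pv, daily_vol=0.02)   # ~ $46,600
wild = total_loss_var(pv)                   # $1,000,000

print(f"model VaR: {tame:,.0f}  /  prices-at-zero VaR: {wild:,.0f}")
```

The model number looks comfortably small; the non-operational-market number is the entire stake - which is exactly why diversification *between* asset classes (including ones that survive a market shutdown) matters more than diversification within one.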

The world is a shit-hole of violence, horror and cruel stupidity, and market participants who understand this recognize the necessity of the Brexit, the unattractiveness of Greek debt, and the certainty of failure of political experiments in hyper-bureaucracy like the Soviet Union and the European Union.  Yes, the EU is a better idea than the USSR, but big, complex, anti-market models - which the EU is turning into - are doomed to eventual failure.  Those "Borgon"-type butch females from Denmark that spend their time inventing massive fines for US technology companies, make me want to just weep.  The EU is a cluster-fail with a degraded currency now, and it relies entirely on punishing levels of taxation to operate.  Just blow it up, and everyone in Europe could pay less tax to the Statists.  This will happen, eventually.

The world's bright spot at the moment remains China, but their success is under assault by a variety of ugly, external factors.  The Israelis are determined to damage China, as they fear an alliance between Iran and China.  And that alliance is very natural and sensible.  The USA and the Trumpers are assisting this stupid, world-wrecking project, and this may well be their downfall.  The world would be a safer, more sane place, if Israeli militarism could be controlled and extinguished.  Israel could remain - but it would have to be run as an open, non-racist nation under UN control, not by a bunch of nuclear-armed murder-monkeys.  But this fix is still probably 40 or 50 years off.  Maybe 100 years off.  But this mindless stupidity of the current structure at the butt-end of the big Sea, cannot continue much longer.  It blew up Rome, and it will blow up us, I fear.  Either the Jews will nuke the Arabs, or the Arabs will all die from a virus, or the Arabs will nuke themselves in Gaza and contaminate the entire region with radioactive dust.  Or maybe someone will weaponize an airborne flu virus, or a smallpox virus, or something horrible-awful that will actually allow Gaza to respond more symmetrically to the Israeli warplane attacks.  Firing cheap-crap rockets with fins made from traffic signs is not going to change anything.  Either all of Gaza will be killed, or Gaza citizens will determine some way to respond to being in that giant concentration camp.  The Israelis are right to be worried about their "status quo", as it simply cannot continue as it is, for too much longer.

As Bertolt Brecht had MacHeath sing in "The Three Penny Opera" - "The world's a shit, and that is all, there is to it."  

Our tasks as humans, is to try to improve on this sorry mess, during the tiny interval we have to live.  But non-executives like Obama waste their time lecturing people (the most stupid thing a person can do), and crumple right at the point were firm, dangerous action is most needed.  Hence, the virtue of having a meglomaniac as the boss.  I've concluded that the boss probably *should* be a meglomaniac.  But one of my contacts suggests that the Saudi's or the Russians have something really nasty on Trump, and that is how they are controlling him.  (I said: Like what?  My contact: Just speculating... maybe a verified video of him being blown by an 8-year old girl?  Or a 10-year old boy?  Something really awful.  Me: Hmmm, that might actually do it, given the bent moral matrix of Americans.  But it could be faked too.  And it would be illegal to show it, so it could not be distributed.  My contact: Yeah, but you could distribute the transcipt, and have witness types testify, etc...  Me: Hmmm... maybe... ).   I don't believe it at all - but then again, every human has human weakness.  Rumor has it that FBI director in the 1930's to the 1960's, J. Edgar Hoover, was a gay-guy, and that he would entrap targets and film gay-sex encounters they had, so as to control them, which is why no one would fire him.  He was USA's Beria, some suggest.  That seems pretty far-fetched also - probably bogus, like most crazy stories are.  But the guy was "untouchable", despite the dangerous, unlawful behaviour of many FBI agents.

The point is - almost no information can be trusted - even when you have three independent, confirming sources.  We have to navigate in this maelstrom of deception, and it grows more complex, ugly and violent every year.  What is to be done?

<Insert commercial here...>   <big sigh...>

A lot of stocks should be selling at close to double what they are at now, but that assumes that the future actually occurs, and that one's DCF models are not going to be nuked.  There are few decent investment opportunities anywhere, and the US markets (and UK and Canada) probably offer better future outcomes than anywhere else.  So, if the political systems can hold together here (that is the real bet that one must make), then any DCF model (discounting of a future stream of cash earnings), should show a fat current valuation.  If our country survives, and does not destroy its economy with French-style Leftist-politics, or Chinese-style Cultural-Revolution mass-murder-mayhem, then we will all do just fine, thank-you very much.  Our banks should be priced at almost twice what they are selling at now, if you assume they are not going to be killed by taxation-demons or EU-style "Borgons"  ("Resistance is Futile!").
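That DCF arithmetic is simple enough to sketch in a few lines of Python.  This is a toy illustration only; the cash flows and discount rates below are invented numbers, not a valuation of any real bank or index:

```python
# Minimal discounted-cash-flow (DCF) sketch: the present value of a
# stream of future cash flows, each discounted back at rate r.
# All figures are invented for illustration.

def dcf_value(cash_flows, r):
    """PV of cash_flows, where cash_flows[0] arrives one year out."""
    return sum(cf / (1.0 + r) ** t for t, cf in enumerate(cash_flows, start=1))

if __name__ == "__main__":
    flows = [100.0] * 10                       # $100/yr for ten years
    print(round(dcf_value(flows, 0.05), 2))    # low discount rate -> fat valuation
    print(round(dcf_value(flows, 0.10), 2))    # higher rate -> thinner valuation
```

Crank the discount rate up to price in political risk, and the "fat current valuation" evaporates - which is the point of the paragraph above.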

This is why everyone who invests, becomes politically involved.  And things are getting weird.  Really weird.  I see a "President Pence" in the future.  But then I also dreamed about my dog waterskiing, so I have learned to apply a very high rate of discount to anything I imagine at night while sleeping...

[Dec. 27, 2018] - It sure looks like a liquidity problem, exacerbated by pro-cyclic management responses.  It's not even algos running on momentum.  And it's not tax loss selling.  It's maybe just liquidity - folks are out of cash.  Big guys are out of cash, and little guys are out of cash?  They have all their cash in the markets, and the markets are rocketing down into the toilet, so they are selling because everyone has some serious cash-requirements for the next few months.  This is coupled with the questionable actions at the Federal Reserve, which is acting to aggressively withdraw liquidity from the system in yet another pro-cyclic exercise. (These guys should go read the 1960's stuff on "stabilization" strategies...)  The Fed goosed it hard on the way up, and they are now goosing it hard on the way down.  Old Soros would call this a "reflexive" phenomenon - the market players and macro-managers reinforce the action that is already underway, so you get an acceleration as the process drives forward thru time.  In economics, it's called (erroneously) a "multiplier", and in harmonic analysis, it is called being "under-damped" - viz. a car fish-tailing out of control, because of awful rear-suspension geometry, like the 1960's Corvettes, which were notorious for being really easy to wrap around phone-poles (which a guy in my neighbourhood did when I was a tiny tot, with his brand-new "Sting Ray").
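The reflexive-feedback idea can be mocked up as a toy price process, where each step chases the previous move.  This is purely illustrative (invented parameters, Gaussian noise), not a real market model:

```python
import random

# Toy illustration of Soros-style "reflexivity": traders chase the last
# move, so each step adds feedback proportional to the previous change.
# feedback > 0 amplifies trends (pro-cyclic); feedback < 0 damps them.

def simulate(feedback, steps=250, seed=1):
    rng = random.Random(seed)                  # fixed seed: reproducible run
    price, prev = 100.0, 100.0
    for _ in range(steps):
        shock = rng.gauss(0.0, 0.5)            # exogenous "news"
        momentum = feedback * (price - prev)   # crowd chasing the last move
        price, prev = price + shock + momentum, price
    return price

if __name__ == "__main__":
    print(simulate(feedback=0.0))   # pure random walk
    print(simulate(feedback=0.5))   # reflexive: moves get amplified
```

Push `feedback` toward 1.0 and the process accelerates out of control, which is the "fish-tailing" behaviour described above.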

If you have a fission reactor to play with, you can adjust the reactivity, and the heat output, with the control rods, which absorb neutrons.  Push the rods in, and they soak up neutrons, the chain reaction slows, and heat output falls; pull the rods out, and more neutrons survive to strike uranium nuclei, and reactivity and heat output climb nicely, but non-linearly, in a very dangerous way.  (Each U-235 nucleus that takes a hit, splits, and flings out more neutrons => "chain reaction".  The moderator's job, by contrast, is to *slow* the neutrons down, since slow "thermal" neutrons are far more likely to trigger fission.)  But pull the rods out too far, and your reaction can very quickly get away from you, and you get a hot melt-down (cf. Chernobyl or Fukushima).    Fission reactors run on a knife-edge balance of stability - kind of like markets do.  Stability+control is the first objective, heat is secondary.

The point here is, we don't pay enough attention to the benefits associated with maintaining dynamic equilibria, thru the use of active management methods... like driving in the winter with your windows open a bit, and the heater on full, so the car interior has a nice fresh-air summer-like feel to it!  (Or having your big BWR bubbling away in a nice, controlled manner, with steam gently spinning the turbines to light up the cities...)   The world is basically a wild-random bit-storm of interactions, and our job is to craft tech which lets us manage the maelstrom so that it throws off wealth and power.  Why can't people just see this simple, clear fact?   Maybe we might have only Homer Simpson in the control room, but at least let's have *someone* sitting there...  Ok?

I remain non-levered long, and it was starting to hurt in the AM and early PM.  If we are dropping into a Keynesian "liquidity trap", then we are all going to be hurtin' bad for the next year or two.  My analysis and models say no, but I always have to consider that I am maybe just plain old wrong.  But then the day ends with a honking big 260 point uptick on the DJIA, and I am up over 4300 C-D's on the portfolio, when I was expecting another down day.  Aye-yah..., as my Chinese friends say.

[Dec. 26, 2018] - Oh my, V2.  DJIA  +1086.25, or 4.98%.  Gracious.  Weasels ripped my flesh, algos ripped my portfolio.  This is getting just a bit silly...   I'm on the highway of regret, and the winds of change, are blowin' wild and free.   But we feel the love... and remain long, & without leverage - the models are saying this junior jump-up is not finished.  We might run now.

[Dec. 25, 2018] - Seasons Greetings to all.  I made my own Christmas Cards this year ...  :D

[Dec. 24, 2018] - Oh my.  DJIA down 653.17, or almost 3% (in one day!).  This makes this the worst Christmas for the Dow Jones Industrial Average ever (in terms of points lost before Christmas).  This is an artifact of the 20,000+ level, of course.  The 1987 meltdown was much worse, percentage wise.  That 1-day 508 point loss was a real eye-opener - as was the waterslide of 1929. 

When I was in University Economics, I was curiously the only one interested in the events of 1929.  In the library, I found all the old, original 1929 to 1931 weekly Time magazines, and I read the business section of every one, just to address my curiosity.  The only one who was interested, was a visiting Prof. from a "red-brick" school in England.  He was a really smart guy, and suggested I write up what I discovered - but I was too busy doing the stupid-work that one needed to get done, to pass the courses.  What is funny is how amazingly useful that quick research effort was.

I am always amazed when my "worst-case" scenarios slowly play out.  It's like a disaster-film in slo-mo.  From a technical standpoint, the market absolutely should have staged a rally today, so to see this slow ski-slope of losses, suggests a *major* (major major? - remember Major Major, in Catch-22?) re-pricing is underway.  Sure, there is tax-loss selling, but there is no buying pressure coming in.  Anyone who has "bought the dip" here, has been kicked to the curb.

I think I will create an annual "Golden Xenomorph Award" for the fellow (or female) who trashes the hopes and dreams for the Future most effectively during the year - sorta like a "Darwin Award" for the Political Big Shot that does the greatest economic damage for the year.  So who should we give the Award to this year?  M. Macron of France?  Ms. May of the UK? (that has a nice ring to it...), or Dearest Donald of the Excited States?  Or maybe Mr. Powell of the Fed?  Or one of our fine Asian friends?  (Mr. Modi of India, with his astonishing "demonetization" strategy certainly should be in the running...).  Send in your cards and letters, and I will announce the prize in the New Year!  Happy Christmas - time to go out and do my Christmas shopping.  We should spend & celebrate now, since it looks like 2019 will be emerging from the events of this year like a little xeno-guy from the chest of ... (oh just stop it, says my inner-editor...)

[Dec. 23, 2018] - I've characterized US President Trump as an unstable megalomaniac.  Perhaps this is really an unfair criticism.  The media is doing everything it can, to make him look bad.  But he is getting results, unlike the leaders in Canada, the UK, and France, for example.

I must give President Trump credit where credit is due.  China has (in English parliamentary terms) essentially "tabled a bill" which will make it unlawful to force non-Chinese companies to transfer their technology and intellectual property to their Chinese partners, as a pre-condition for doing business in China.  This is a very big deal.  Foreign (ie. non-Chinese owned) firms operating in China have routinely been forced to surrender their trade secrets, engineering specs, and patents to get any business foothold in China.  This is one-sided and very unfair.  So Trump has scored a win here, and world business relations will be better for it.  This is what China understands.  Trump has also delayed the tariffs on Chinese products until March 2019, with the implied hope that a more fair, balanced, and equal relationship can be established between the US and China.

And Trump will dump Defense Secretary Mattis as of January 1st, not the end of February.  That makes a *lot* of sense.  There is nothing worse than having some unhappy fellow stay on board when he has already "peed in the soup" (cf. "War of the Roses").   An immediate clean break is almost always a better idea.   Trump wins on this one too.   He is showing executive leadership, when most politicians first consult the opinion polls to determine what they should do.  Or they just bugger off and avoid the problem (like our Mr. Trudeau does), and attempt to spin-manage the bad news (while our people rot in windowless Chinese jail cells, without access to lawyers or any mechanism of legal process.)

Meng Wanzhou is out on bail.  She will have a fair chance to fight extradition to the USA, and the kangaroo-court justice-system of Rod Rosenstein, who does "trial by press conference".  This is just bigotry.  Rosenstein (and the US Justice Department), look like the old Soviets of the Cold War era.  He and the "US Justice Department" are an embarrassment to the USA.   At least Trump is doing his job, being an executive, and getting some results.    The US "government shutdown" is a complete piece of theatre, (nothing important closes) and it looks like Trump might win this one too.  The man is acting like a boss has to act, and demanding results from the flaks, flunkys and obstructionist opportunists (like Chuck Schumer, for example) that are keeping him from doing what obviously needs to be done.

Thanks to the subtle corruption of the US "Democratic Party", and the idiotic US rules that support and encourage illegal immigration, the US has pretty much lost control of its southern border.  Some sort of solution will have to be created, and at least some sort of formal border control (ie. a wall, fence or boundary) will have to be built.  Recent legal rulings by politicized "Democrat" US judges - which mandate the return of deported illegals - show just how broken and out of control the entire US immigration process has now become.

Trump's "Mexican wall" idea seemed at first an overkill idea - but with the recent rulings by the politicized US courts, suddenly the border wall is starting to look like a sad, but necessary response to the problem of out-of-control influx of illegals.

There is also a rumor that President Trump is looking at making staffing changes at the US Federal Reserve.  This might be a good idea.  The Fed is out of step with Europe and Japan.  It seems very odd that Japan would maintain *negative* short term interest rates, *zero* long term rates, and aggressive QE, while the US is looking at 2.25 to 3.00% long rates, and an aggressive reversal of QE (QT?  Quantitative Trauma?)

The Q3 GDP growth numbers for both Germany and Japan were *negative*.  With global economies already turning down, international trade under direct threat of big tariffs, and rule-of-law being suspended in China (and now Japan as well, with the nasty, lawless witch-hunt that is underway against Carlos Ghosn), it does not seem to be the time to be hiking up rates.  Powell looks like he just plain got it backwards-wrong re. the December rate-rise.  It's like hitting the brakes in a high-speed curve - you are going to spin out badly.  Just doing nothing would have been the right course of action, given the already sagging GDP numbers.

Folks I talk to in the USA are actually quite nervous about their jobs, and almost everyone is now carrying way too much debt.  The (almost) inversion of the yield curve suggests we are looking at a serious 2019 Q1 and/or Q2 recession now, globally.  It was not necessary to throw sand into the gears here.  The machine is *already* slowing down.  

The Fed cannot see where we are now?  Perhaps they are misled by their big government salaries.  Everyone in Washington (and Ottawa also!) is very isolated from real economics.  They stay rich and happy in up or down cycles.  For the Government boys, the work is light, the pay is big, there is lots of stay-at-home time, cash is always on-hand, and there is a great fat pension after a few years of time-serving.

And if you shut the entire place down, no one really gives a flying f%#~!  What a wonderful world, yes?  Happy Holidays!  And don't fear the future.  No matter what we do, it will arrive quietly like a xenomorph on small, silent feet.  And with a fine long-tail for balance.

[Dec. 21, 2018]  -  The markets blew up again today, with the DJIA down over 400 points.  Financials and technology are being repriced downward at a disturbing rate.  We are now dropping close to 2% a day, and no bottom has yet been found.  This is a dramatic turn of events, entirely brought upon us by unwise political actions & questionable strategies, pursued by foolish leaders. We are all now in trouble.

From Confucius:


[ The superior man, when resting in safety, does not forget that danger may come. When in a state of security he does not forget the possibility of ruin. When all is orderly, he does not forget that disorder may come. Thus his person is not endangered, and his States and all their clans are preserved. ]

Everyone should study Confucius.  What astonishing wisdom.  In the Western World, we had so many wars.  Our systems have evolved, like living things do.  We have achieved what we have, and we are where we are, because we tried every other awful thing, and discovered disaster each time.  So, by trial-and-error (or scientific experimentation), we created our democratic + commercial world.  Thucydides wrote of war, but Herodotus nailed the truth of it, saying simply: "In peace, sons bury their fathers. In war, fathers bury their sons."

The right thing is easy to understand:  己所不欲,勿施於人

[ What you do not want done to yourself, do not do to others.]

In researching my growing sense of disaster - I turned to some classics - and found suddenly the morning news!  What a bizarre sensation - like an archeologist looking for 2500-year-old relics, and finding a working computer.  I just learned of Graham Allison, and his book "Destined for War" (on the "Thucydides Trap"), from a google-hit on yesterday's South China Morning Post.  Xi Jinping was exactly correct, in his 2015 proposal to Obama.  China and the USA need to craft a viable strategy for engagement as partners and peaceful brothers.

But the USA looks to be broken.  It does not even seem to have a mechanism anymore to incorporate understanding in its operational process.  And we here are connected so tightly to them, but unable to have any input or influence on their operations or conduct.  When I was a child in the 1960's, we had Bomarc nuclear-armed missiles, and we built the DEW-line to spot Russian incursion into our Arctic airspace.  My mom's cousin had worked on uranium enrichment using the Elliot Lake mother lode of ore, during WW2.

We had 28 nuclear-armed Bomarc's at North Bay by 1962, and as a result, we slept soundly and securely in our beds as children.   There was no property confiscation here - except during the war, where the Japanese were made to suffer.  But there were no Red Brigades here.  There was no Cultural Revolution here.  The terrible lie of Communist Ideology failed to gain traction in Canada, despite the Communist Party being able to operate legally here.

In 2nd grade - the early 1960's - the dangerous Kennedy brothers (whose father was a financial gangster) almost got us into a war with Russia over Cuba, because of the missiles that Russia had given to Castro.  We would do "duck and cover" drills in the gym, to be ready if war occurred.  (I secretly prayed that we would be attacked, so the school would be destroyed, in all honesty).  I had a model of a Bomarc, and I was proud of it, as I was of the Arrow fighter, which our foolish Tory government had killed.  But the Bomarcs were great.  They were essentially surface-to-air high-speed cruise missiles armed with plutonium warheads, and it meant that even if the USA was wiped out in a "Dr. Strangelove" style Red-Insanity type of attack, we could respond to protect our land, and reduce our enemies to radioactive ash.  That was a very good thing.  Like the ancient Romans, we had known war, and so prepared for it to ensure peace.  This kept us at peace, a peace we still have.  We should all try our best to maintain this peace.

The Bomarc launch-site is a tourist attraction now.  It is basically ancient history, like Thermopylae in Greece, where Leonidas and his Spartans held off the Persians.

So where are we now?   Look at Rod Rosenstein, the person who runs the  "Justice Department" for the USA.   Inspect carefully his clan identity, and learn what is driving America now.   We cannot even speak of the process that is infecting us all in North-Am, without being called terrible names, and risk being accused of crimes.  Yet these Israel-supporters are absolutely driving the process, and determining the scope, direction and reach of American foreign policy now.  It is getting quite unbelieveable - and dangerous.  Iran is not an enemy of USA, but Trump and his supporters are determined to demonize both Iran and China.   The USA has been sucked deeply into the ugly fraud of mid-eastern politics, and America has fractured itself as a result.  There is no other way to explain the hostile and dishonest politics of Washington now.  The environment there is toxic.

Rosenstein, as US Assistant Attorney-General, is ramping up the attack on China now, as it appears Israel sees China as a great threat.   Any computer-hackers in China are automatically deemed agents of the Chinese government.  This is idiotic, but the Americans fall for it, because Rosenstein is the modern model of the perfect operative.   He has the power to assault and attack anyone he wants, and he is an aggressive supporter of Israel. And Israel fears and hates China, since it cannot bully it with it's disinformation assault and strident shouts of "anti-semitism", nor operate internally within the country, and dominate its media and judicial process.

These Israelis and their supporters are probably going to take us to global war.  Not immediately, but eventually.  There is no alternative path, I fear.   And with a dangerous meglomaniac like Trump in the White House, the clans that support Israel have the upper hand in the USA now, and also in Canada.   The attacks that are being directed against China all seem to have Israeli-supporters fingerprints on them, and this Rosenstein character is a dangerous and disturbing person.   Rosenstein was an Obama man, but has become a Trump supporter, and now appears to be driving the current anti-China actions of his office.  Watching this process play out, is tragic and disturbing.   Recent events confirm the toxic spiral of deception.

Why would Trump exit Syria in the way he has indicated America will soon do?  Defence Secretary Mattis had no choice but to resign, as the decision is simply insane.  But it will give Israel the opportunity to continue to advance it's policy of destabilization of the Arab world.  One has to give the Israel folks credit for effectiveness.  They have programmed things well. And to have this Rosenstein fellow running the "Justice Department" in the USA, means Israeli policy can operate directly through the American legal system - and even reach out into Canada, and smack-down our own legal procedures and even our sovereignty itself.    It is simply astonishing and grotesque.   Whatever you think about China, the arrest of Meng Wanzhou at the airport in Vancouver is completely, absolutely unacceptable to everyone, not just Canadians. Our government made a serious mistake, on several levels.  This action will destroy any chance to reach an improved trade deal with China.

This idiotic and illegal action might not just be the result of incompetance and stupidity on the part of our government.  Perhaps there are other forces at work, that are orchestrating this hostage-taking?   Who benefits from the assault on Canada's sovereignty?  Everyone is damaged by it.  A major Chinese company is attacked, an honest law-abiding person imprisoned, and US-China trade is damaged.  The financial markets are badly hurt (The US Federal Reserve interest rate setting was explicitly communicated beforehand. It is not the cause of the market meltdown.)   Canada is made to look like Putin's Russia, where politics trumps law.  Canada-China relations are damaged badly, and three Canadians are arrested in China in response to our bad behaviour.  But what about this Rod Rosenstein, and the so-called "US Justice Department"?  He is not damaged, is he?  

Rosenstein is holding press conferences, and invents more Chinese "technology demons" for the people of the USA to direct their hate towards.   Can a conclusion be reached here?  Is Rosenstein acting in America's interests - or for the interests of another entity?

And Rosenstein calls Trump "Lincolnesque".  This is just wrong. 

There is simply no reason for China and United States to be in conflict with each other.  This is just crazy, and people like this Rosenstein person are engaged in trying to inflame and destabilize American public opinion to create benefit for a specific group of people, to achieve a specifc political outcome.   The whole "Trial By Press Conference" that the Justice Department is using seems contrary to American law and basic legal principles.  But it benefits the current group that is now running Washington.

We should not be spooling up a war with China.  This is unwise, and these special groups - like the Israeli folks and their supporters and clan-members - should be made to understand that the fraud of this disinformation effort has been recognized.   Folks should understand where the real risks are.  We can see explicit, anti-Chinese racism now in the US operations.  And I see these race-oriented assaults coming directly now, from those in the US that support Israel.  Iran is just not a problem for the USA.   It is only a problem for the neo-fascist Likud militarists who currently run Israel.   It is important for folks to see what is going on.

We here in North-America, must re-think our unwise blanket support for Israel.  We must see what a risk it is putting people like Rod Rosenstein in positions of great political power.   It is not just the "Thucydides Trap" we are falling into.  We must recognize our position in the "Israel Trap" in which we are already ensnared, and which is attempting to pull us into another unwise conflict scenario.  We just don't need a war with Iran or with China.  War is just a bad idea, when it is not necessary.  It is not "anti-semitism" to contain and restrict Israel in the name of peaceful relations among nations.  We should recognize this truth.

[Dec. 20, 2018] - "We can live beside the Ocean, Leave the fire behind.  Swim out past the breakers, Watch the world die!"  - Everclear.  

An associate says I need to install Telegram on my Android tablet/phone so we can communicate securely. Ha ha.  Why bother?  Last nite, I read all the exfiltrated EU diplomatic cables that the three guys of "Area 1" released.  Like the Russian Ambassador in "Dr. Strangelove...", my source was the New York Times.  Chinese (they think) agents simply hacked a diplomatic site in Cyprus, and scooped all the EU low-security diplomatic chatter. 

The cables are actually quite interesting.  It surprises me that anyone even does any sort of analysis.  Oh my.  Another demarche.  That'll fix'em good.  (FFS. I've read a *lot* of formal chatyap.)  Diplomacy only works when one has warheads and ballistic missiles in one's back pocket.  Or the gunboats and battleships of 100 years ago.  The entire CTBT and the whole anti-nuclear movement is profoundly wrong and tragic.  It is only *because* of US MAD (mutually assured destruction) theory, that we in North-Am avoided nuclear war when I was a child.  Also reading "The Porcelain Thief", an excellent book about China and Taiwan - really well written.

The stock market reflects not fear, but a genuine rational awareness that the world is now being run by a tragic cabal of self-serving unwise dipshits - there is no kind, gentle way to say this.  It's not "Rashomon" anymore.  Now, it's "Ran" - (Nihonian for "Chaos").  Or, for you English-types, "King Lear".  For the USA to withdraw all forces from Syria right now, is madness.  Things have just been stabilized.  Like it or not, without active management of process, free elections will not occur.  A joint operation should have been established between the Russians and the Americans, and they should both remain to oversee re-construction and re-creation of some sort of baseline civil society.  But that will not happen now.  We even have an expression: "Snatching defeat, from the jaws of victory".

The DJIA is down another 392, to 22,931.  Probably looking at that 18,000 we forecast last year...  Models still have no clear edge yet, too many variables, not enough data, and ceteris paribus assumptions are not holding anywhere, so chaotic, non-linear phase jumps are now the norm. 

Years back, I was trying to estimate Lyapunov exponents to get a sense of the chaotic dimensionality involved in market process.  Looked around 5 to 7 at least.  All my hacking/research/trading/messing-about seems to suggest just momentum training is about all you can really do.  We are hard-core oversold now, and due for a pop, no question.  But every day, we keep getting lambasted by bad events and terrible executive decisions.
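For readers wondering what "estimating Lyapunov exponents" even means: here is a minimal sketch on the logistic map, where the largest exponent can be computed directly as the time-average of log|f'(x)|.  (Doing this on noisy market data is far harder, requiring delay-embedding methods; this toy only shows the idea, with illustrative parameter values.)

```python
import math

# Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
# estimated as the long-run average of log|f'(x)| = log|r*(1-2x)|.
# A positive exponent means nearby trajectories diverge exponentially (chaos).

def logistic_lyapunov(r, x0=0.4, transient=1000, steps=100_000):
    x = x0
    for _ in range(transient):                 # let the orbit settle first
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(steps):
        deriv = abs(r * (1.0 - 2.0 * x))
        total += math.log(max(deriv, 1e-300))  # guard against log(0)
        x = r * x * (1.0 - x)
    return total / steps

if __name__ == "__main__":
    print(logistic_lyapunov(4.0))   # chaotic regime: approaches ln 2
    print(logistic_lyapunov(2.5))   # stable fixed point: negative
```

A dimensionality of 5 to 7, as estimated above for markets, means several such exponents would be in play at once - which is why prediction beyond momentum is so hard.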

Japan seems to be "blowing up real good", sadly.  The idiotic palace coup at Nissan is showing how toxic business can be for even top bosses in Asia-land.  Carlos Ghosn's pay is not much by modern CEO standards, and his exit dollars were not decided, so arresting him looks 100% bogus - basically a put-up job by a bunch of yakuza is what it looks like.  Seen this movie before, too.  The Japanese prosecutors should be deeply ashamed of themselves, for getting involved in this ugly internal contest.  In Waterloo, they paid John Chen something like $300 million (over ten years, or some such thing), to come and save RIM.  The guy did *not* want the job, and they had to keep throwing money at him until he agreed to take it.  Most top jobs are nasty now.  You are set up as a target, and everyone starts taking shots.   In this world, if you are going to be a boss, it looks like you need to have your own armed security service - and that is where the trouble begins, right?  

It's why we need courts that are real, and not just kangaroo theatre.  Disputes are real, and both sides need to have a chance to tell their versions of the truth.  Renault put $5 billion into Nissan, and I suspect they are very unhappy about what has happened.  The auto business is in real trouble - cars are now far too expensive for most people to afford.  A new pickup truck is $70,000 in Canada.  No one who has a farm has an extra $70K lying around.  The folks who buy trucks are rich Toronto lawyers, and federal politicians on the public payroll.  It is comical.  The "affordable" cars produced here and imported, are tiny crap-boxes.  So the auto-business is tough, and it will get tougher.  Carlos Ghosn is probably the guy you need.  The Nissan "Board" should probably resign, if they want to save the company - again.  Damn, we are now down 421 on the Dow.  I am damn fortunate that I am a paranoid trader, and do not use margin, else I would have been forced to start unloading into this sad storm of the awful.

[Dec. 19, 2018] - The year winds down.  The BBC reports another Canadian has been arrested in China (that makes three of our citizens), and "migrants" continue to flood into the southern US, aided by political agents bent on destabilizing the shaky US administration.  An astonishing lack of wisdom seems to be infecting the world.  The Brexit is now certain to be a disaster, regardless of how it plays out.  If May gets her deal, the UK will have gained nothing except the requirement to contribute open-ended payments to a bankrupt Europe, as it devalues its currency to attempt to fund its untenable social-welfare schemes.  The Paris protests are a prelude of what will have to come.  The "hard Brexit" is the only sane choice now, but disruption will occur, regardless.  The entire "Euroland" experiment is doomed, as they have dumped any attempt to maintain a sound currency, & the rules requiring reasonable restrictions on national deficit spending.

US politics resembles a "Cold Civil-War".  A gov't shutdown was averted, by law-passing action.  Oh good.  Trump has declared ISIS defeated, and will withdraw all troops from Syria.  Russia won this round & America lost.  The Saudis are laughing, as they know the low oil prices will bankrupt & destroy the marginal producers.  Canada's government, run by Pierre Trudeau's son, is in the process of committing economic suicide by destroying our Pacific-Rim trading relations.  China is locating & assaulting Canadian citizens operating on the mainland, and arresting them.  The wreckage is piling up.  Our financial companies - which in Canada are a large part of our economy - are continuing to fall in value, as they see international opportunities evaporate, and a domestic slowdown loom large.

The Chinese have an expression: "When you have reached the top of the mountain, then any direction you choose, leads down."  That seems to be the current picture we are facing now.  

The momentum models have worked great - until they stopped working.   Interest rate increases should benefit the banks and other financials, but if a serious recession occurs, then the fall in home prices, and the retracement of the real-estate market in general, will cause the collateral base of the banks to fall.  My models say it is way overdone, and a 10% bounce in bank stock prices is overdue.  But the models might be wrong, as they all have implied "ceteris paribus" assumptions that go out the window if global trade is badly damaged.   And this now seems to be the scenario that is playing out.

We used to have a "War on Drugs" that was just plain stupid.  But now, we have a "War on Trade" that manages to actually be even more stupid.  I am just knocked-back, staggering at the mind-numbingly stupid, foolish behaviour that I am seeing at every level now.  It's as if leaders & executives really *want* to have a big war, and enhance the prospects for global meltdown by self-destructing whatever they are immediately responsible for.  Curiously awful - but also awfully curious.

(The "Ceteris Paribus Fairy" left me. See the link below and the slide at right. Huawei is a $100 billion US/yr revenue company - essentially a Chinese equivalent of Cisco+Apple.  They build good equipment, near as I can tell.  Arresting their CFO blew the "Ceteris Paribus Fairy" right out of the sky, sad to say.)

[Dec. 18, 2018] - Santa's rally looking weak.  GDP numbers looking soft for Q4.  Retail sales for pre-Christmas looking poor.   Recession will be driven by the new "War on Trade" that seems to be underway.  Attacking trade is really stupid.  We need new leaders.  Our PM in Canada has blown up our Pacific trade links and thrown our people under the bus, China is acting badly, the UK "Brexit" is a disaster, the French have trashed Paris, and the American gov't has gone into "full dysfunctional" mode.  Happy Christmas.  Have a drink.   Could be worse.  Could be raining.

[Dec. 17, 2018] - So awful tragic, it is becoming...  World is morphing into self-destruct mode.  Our "leaders" here in North-Am appear to be inept or insane liars.  I mean, they are completely foolish and irresponsible to a degree that is shocking, obscene and unbelievable.  Our Prime Minister has managed to destroy a Canada-China relationship that has taken 40 years to create.  And at the same time, degrade our international reputation and impair Canada's sovereignty.   We in Canada, in the past, have operated at a higher standard than most nations, mainly because of our British history - and our small, quiet, internal wars.   We built something that works, so we don't fight each other with swords and bombs at the polling stations, come election time.  We have a working democracy.

It *must* be understood that the arrest & seizure of Meng Wanzhou, CFO of Huawei, in Vancouver on Dec. 1st, was illegal and wrong.  The entire exercise is political in nature, and should be clearly recognized as such.

And it turns out, the July meeting of the "Five Eyes" spy guys was all about "How can we destroy Huawei?".  Bunch of liars, it turns out, these spy-guys.  Complete scam-artists.  There is *zero* evidence that Huawei equipment contains Chinese hardware to allow data exfiltration.  (This from the German intelligence community - it's not just my own opinion.)  But there was this agreement between Obama and Xi Jinping (President of China), to both agree to stop hacking each other's hardware & software for spying.  This came about because of the discovery of a datacentre in Beijing, where some bogus (exfiltrating) UDP packets were noticed "calling home" - which both sides said they had not installed!  But this was in *China*!  (The Chinese argued they were victims of this stuff.)   So the fear was born that Huawei and other China makers of 5G hardware (for mobile) *might* be insecure.

Then the Bloomberg service ran this story about kinky hardware found on some US server-board computers - from a Taiwan company that had its fab in Guangdong on the mainland (which is where *everyone* now has their fabs) - but the story looks to be a manufactured piece of fraud - complete, absolute disinfo (*all* sources were anonymous), and my research in Waterloo has found *zero* evidence of such hardware hacks.  No one here has ever seen any hardware-hacked system-boards.   These might exist - but it is unclear who the planner is.  Since *all* hardware is made in China, these hardware-hacked boards (if they even actually exist) could have been fab-ordered by anyone.  It is as likely to be the NSA/CIA as it is to be the PRC-army-group or the Directorate-7 guys in China.

Realize: any machine that does *anything* on the internet can be hacked.  Once a sufficiently high level of complexity is reached, it is simply not possible to examine all possible operational features of all possible software+hardware interactions and combinations to ensure absolute security.

Yes, there are these PRC army-intelligence guys who have the entire China internet completely monitored and data-tracked.  If you use the internet for political purposes in China, or talk about anything controversial  ( like the mass-murder in 1989 in Tiananmen Square, for example), you get flagged, visited & maybe arrested.  But from here, we need to tell China that it must *not* mess with our business people, or you will just make a bad situation a lot worse.  You will damage public order, guys.  Release our people.  We were wrong to arrest Meng, but don't double the wrong, by being arrogant-stupid, Ok?   This whole thing is becoming like a "South Park" episode.

We *know* from Snowden's material, that *all* the internet here is completely "harvested".  And in Canada, the "Shamrock/Blarney" folks in Ottawa have been tapping all US phone calls, since the 1970's.  This would be illegal, except we do theirs, and they do ours.  It's like a data rape-gang.

Some Sandvine guys supposedly found some hacked code on some machine years back, but this story about kinky hardware attached to the boards to open a hardware "backdoor" to the o/s, looks like an invention.  Intel (the chipmaker) computers have AMT, a built-in webserver which could be accessed with no password, and could exercise complete control over your machine.  It's on most of the 64-bit machines made by Intel.   To test it, I enabled the AMT on my machines, and I could start (and stop!) them remotely from any ssh-enabled machine on my LAN - including mobile phones and tablets.  This is just the "Clipper Chip" of Bill Clinton's day.  It exists and is installed now to allow remote control of server-board machines.  Folks try to disable and stop it, but it is difficult to fully disable.  And it was discovered to have a no-password backdoor also (the "Silent Bob is Silent" exploit).
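If you want to see whether a box on your own LAN is exposing AMT, a crude probe like this works.  (A sketch only - the host address here is a made-up example; AMT's embedded web server answers on port 16992 for HTTP, and 16993 for the TLS variant, when the feature is provisioned.)

```shell
# Probe one host for Intel AMT's embedded web server on its default
# HTTP port (16993 is the TLS variant).  HOST is an example address -
# substitute a machine on your own LAN.
HOST="${1:-192.168.1.50}"
# short timeout keeps the check fast on hosts with no AMT at all
if curl -s --max-time 2 -o /dev/null "http://$HOST:16992/"; then
  echo "AMT web interface responding on $HOST:16992"
else
  echo "no AMT response from $HOST:16992"
fi
```

(Proper AMT tools exist - the "amtterm"/"amttool" package, for instance - but even this crude check shows how visible the thing is on a network.)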

But this "China is monitoring our comm grid, and Huawei is part of the crew doing it, so don't buy Huawei 5G hardware or routers!!" - looks to be a planted story.  The CIA/NSA (and CSE in Ottawa, who monitor all USA foreign/overseas telephone calls) are worried because Huawei looks to be honouring the Obama-Xi Jinping agreement - and not letting *any* monitoring hardware be installed.  And the rumour is (I have not seen the boards) that Cisco and all our makers of back-bone switching hardware *do* allow physical exfiltration equipment to be installed, to assist the "Five Eyes" datastream monitoring activity.  Read the Snowden material.  It's all there.

So my conclusion is that Huawei is being targeted with a "Big Lie" style campaign, as they offer non-Five-Eyes made hardware, and are the only ones NOT putting the taps on our backbone router devices!   Is this not just hilariously ironic?   

The German spy-guys are the ones who have also become deeply suspicious of the anglo-collective (the Five-Eyes spy-group), because the Germans actually *do* have quite a lot of evidence re. exactly how the CIA/NSA exfiltration of the TCP-IP/UDP internet backbone data is occurring.  (Remember, their spy services now combine both East and West expertise and knowledge-bases.)  There are folks in German Intelligence who have suggested that to be secure => you have to remove *all* computers from your secure areas, where sensitive stuff is discussed and documented - i.e. you use typewriters, and manual, physical documents, because once an electronic version is created, it *will be* hoovered up with 100% certainty - if not by comm. spyware, then by monitoring the microwave signatures produced by the keyboard drivers in the machines, or the screen-display hardware, and such tricks.   There truly are no secrets anywhere now.

We need to stop doing this stuff.  Everyone.  Especially our own government.  We need to take Vaclav Havel's advice in his famous article, and learn to live in truth.  Really.  Our leaders have locked the door to the flight-deck, and they are pointing the aircraft at the ground.  We have to get these folks to pull back on the wheel, and not destroy the world we have all worked so insanely hard to build, because seriously, that is what is at stake.  Nothing stays static.  We are manufacturing bigotry and hatred and this is idiotic.  It will grow and destroy us all.

We all make mistakes.  China did a really horrible thing at Tiananmen in 1989.  But we rounded up all Japanese in 1942, and put them in concentration camps, and stole all their stuff - money, houses, cars & property.  And then we mined, refined and supplied the yellow-cake uranium for the atom bombs that nuked their cities.  It was total war.   And China and Japan - despite terrible hatred on each side - have managed to make peace work for both nations.  We have also worked hard to engineer peace.  But the project has to be ongoing.

We need to de-couple from American disinfo, and not be so amazingly stupid, and spout nonsense about "keeping the process apolitical".  I hear the stupid foolishness that some in our government spout, and I just want to weep.  The Tories can be clueless sometimes, but the Liberals are being just amazingly stupid here.  We have to fix this.

[Dec. 13, 2018] - Discovered something.  Significant result.  Looks like I will have to go dark also.  Ta.

[Dec. 12, 2018] - Reminded of an old quote, by an old king to his young son: "If you knew with what little wisdom the world was ruled..."   When I worked as a young lad for "Treasury",  I recall the cheat-sheets that we would prepare for the Minister, so he could appear wise, and respond to questions in the House during Question Period.  The little sheets had to fit in a palm, and not be visible to the Opposition benches.  Imagine boiling down *all* the economic data to a tiny squint-worthy document, smaller than a playing card... But at least both sides focused on actual numbers, and real events.  Now, it is all just media-politics, exploding balloons and the fireworks of fantasy.

We live now in a time of growing madness, governed by arrogant smiling children who believe their own fantasy-world is an objective reality, and scary-crazy old men who do much the very same thing.   Their lies launch past each other like rockets, but it is all just a circus show for the crowd.  This is theatre without any substance at all.

But perhaps we are just reverting to the historical mean.  Justinian's wife was a whore and a show girl, and they both loved the theatre more than anything, as did Nero.  The history of England is the plays of Shakespeare.  In Japan during Edo times, it was common to go see kabuki performances, and spend the entire day in one's box, eating, laughing, drinking and watching the play unfold.   Even our courts have decoupled from law, and offer theatre instead.

But shows come to an end, don't they?  And so do nations.  If we destroy world-trade, then the recent events in France are the accurate picture-of-the-future.  And regardless of what the Americans do, our survival in Canada as an independent nation (or group of nations), requires that we get a new government at the national level very soon.  Our current federal rulers appear to have no idea what they are doing, nor how fragile things are at this time.

In Canada, one grows weary of watching the federalists of Canada fly our future into the ground, all the while lying to us, saying how great everything is.  It's fine for the Ottawa actors, with their huge tax-funded salaries and massive pension schemes - but for many others, life is a grim struggle to make their month-end numbers.   Only the elite would suffer if we ended the entire fraud of Federalism, with its "carbon tax", "income tax", "employment tax" and the mis-management of the national currency.  We would be better off if we just removed all parasitic fraudsters.  The theatre-show could just end, as all shows must.

[Dec. 11, 2018] -  The Paris riots show that the newest political idea might be a "Frexit", as Macron has managed to make the Euroland concept look like an instrument of economic assault.  US response? => Saudi-shoeshine-guy Trump gloats, as Paris burns.  And China, which makes all of America's products at a cost-base of maybe 30-cents on the dollar, is now demonized as the bad-guy for being too successful.  This is madness.  So the market is revising upwards the rate of discount it applies to future earnings, as the future vision of prosperity & peace, is being replaced by a vision of conflict and destruction.

The failure to sustain the price-bounce on the NY markets looks not to be an algorithmically-driven response this time.  The algos are being over-ridden by sellers who need the cash, it looks like.   The hedge-funds perhaps are over-extended, and they are trying to reduce leverage too quickly?

I wonder how that "yellow vest" fits?  Democracy does not seem to work right anymore, and rule-of-law is being replaced with American instructions.  In Canada, our leader looks like a sock-puppet of the United States, and has decided to destroy our relationship with China by doing a "Khodorkovsky" on a female Chinese business executive at the Vancouver airport.   In USA, the Trump and his gangsters look to be puppets of Saudi Arabia, and that other warship-of-hate at the anus-end of the Mediterranean Sea.  It's more than a little awful.

[Dec. 10, 2018] - If shipping anything to GE or GE-controlled entities, make sure to get payment before you ship.  The American boys that run GE have only one card left to play.

[Dec. 09, 2018] - P3-model shows even greater oversold.  If market does not turn next week, (and turn *hard*), then we will have entered a new regime of madness.  We will have sailed-off-the-edge-of-the-map, and previous datasets will no longer have any predictive ability. (Yes, this happens.  Know it.)   

Studying Paris protests - now riots.  These events are maybe more significant than the 9/11 attacks on the USA.  World may change.   Macron is acting like an Emperor - Doofus-Maximus.  He may well have to resign, which is bad, because France *needed* reforms.  Might be the end of the Great Euroland Project - which we forecast would end, and end badly, since all it seems to offer is extra government, free immigration of the poor and criminal class, and much higher taxes, fees and costs for business operators.  A political strategy that hurts wealth-makers is unwise and unsustainable.  Examine China, which has gone wisely, exactly the other way.  

Angela Merkel, German Chancellor (she has Hitler's old job), flies commercial flights (not State Aircraft) and lives in an ordinary Berlin apartment.  Macron, a member of the French ENA leftist-hyper-elite, lives in the "Elysee Palace".  What kind of idiots have their leader live in a Palace?  Hmmm.  We know the answer, don't we?  He paid 200,000 Euros for new carpeting, and has a hair-dresser whom he pays 10,000 Euros/mo. using French tax money.  The price of petrol was over $7.00 per gallon, and his "Climate Change" strategy was to raise fuel prices by 23%, putting most farmers and small-business people into red-numbers on their income-statements.  But it will be the actions of Emmanuel Macron that will be "red-lined" here.

France influences half of Canada, in a very big way.  Legally, half of Canada is French, and the strange awfulness of French politics has a deep influence on our local political scene.  We even have a quasi-Frenchman running the place here, who is more concerned with "aboriginals" (they used to be "Indians"), than with Anglo-Canadians.  He is probably a good man, and has shown fine political courage in the past.  But he is a poor leader, as he has shown he is unable to take decisive action when the need for such action is critical. 

We were supposed to build a pipeline to export Alberta oil, but the project has been blocked by B.C. eco-terrorists.  Trudeau has done nothing.  Now, the Socialists who run the oil-producing province have ordered production cuts.  (What?!  So, we are OPEC members now?)

No. It turns out we are Putin's Russia.  We are now hauling business-executives off of aircraft, and throwing them into prison on bogus, "Trumped"-up fake criminal charges.  I wonder if Sabrina Meng Wanzhou, CFO of Huawei, will get to know Mikhail Khodorkovsky, once this obscene example of "judicial hostage-taking" is over.   Trudeau's decision to allow this New-York initiated arrest, has hurt Canada badly.

So our "Prime Minister" - Justin Trudeau - is now every bit as much of an embarrassment to us, as Emmanuel Macron is to France.  Trudeau prospers, because he is very politically smart.   But by kowtowing to Donald Trump's America, he has done spectacular damage to Canada - and to our future relations with China.  He asserts this is not the case, but he cannot be that stupid.   This "airplane arrest" should not have been allowed, and I find it as profoundly offensive as when Putin's agents grabbed Khodorkovsky.  We look like terrorists.

Ms. Meng broke *no* Canadian law, and for her to be arrested in this manner in Canada is absolutely wrong.  It is an error, and should be corrected immediately.  And this remains true regardless of what your opinion on China is.  We are not required to break Canadian law (our "Charter of Rights"), in order to assist American political strategy.

[Dec. 08, 2018] - This is a nice tribute to John Lennon, sung by David Bowie - "Imagine", from the 1983 "Serious Moonlight" tour.

[Dec. 07, 2018] -  Tomorrow is the anniversary of John Lennon's murder.  He was shot down while standing outside his New York apartment building, December 8th, 1980, by an American assassin who was seeking infamy.

Image at right shows the MCL-P3 model results for TSE-listed stock, symbol: CM.  This should not be considered investment advice, just a demonstration and presentation of the results of a mechanically calculated, computer-driven exercise.  The P3 model suggests a significant over-sold condition is now evident for this security.  (Full Disclosure: We have a position in this security.)

[Dec. 06, 2018] - With the possible exception of the folks in the Cdn astronaut program, or perhaps Elon Musk, I sometimes think I might be uniquely lucky.   Yes, I am still long, but the AI's are making it clear that I will be ok.  I post this here to establish the truth of this approach.  Today, when we were down 700 points on the DJIA, I was feeling a little worried.  Perhaps it was the end...?  Looked pretty bad.  But by day's end, we closed down, only 79.40 on the DJIA, and 234 down on the TSE. 

Canada is always a puppy to USA, and it is on my list to craft a political enterprise that can fix the Trudeau failure-model.  But I realize I can't do everything (anything - but not everything...).   Tomorrow, we are likely to be up, if B. Baruch and the models are to be believed. 

[Dec. 05, 2018] - I did not see one news story on the Paris riots in our local MSM.  Amazing.  Yes, Paris is burning.  I've been reviewing events there, and suddenly, the "Brexit" looks like a very good idea.  The UK Prime Minister May has failed to do it right (the Brits will still get screwed by heavy Euro-taxes), but a new PM can fix that error with a new bit of legislation.  The "Hard Brexit" is the correct way to go.  But it will take new players who see the Eurofraud for what it truly is - just another big tax.

The French riots - by the well-organized "Yellow Vests" - are based on real anger.  Skip the MSM drivel about George Bush (he was *not* Winston Churchill), and google "Paris Riots 2018 France Taxes" and check the images of Paris burning.  Trump was right on this one.  OECD reports France now has the *highest* tax rate per GDP of any modern nation (detailed data going back to 1965), and has now surpassed Denmark (the "Nation-of-Taxation"), taking 46.2% of its GDP back in taxes from the people of the country.  The only good, high-paying jobs in France are government jobs.   (Carlos Ghosn, ex-boss of Renault-Nissan, would agree, I am sure!)  If you live in France, and are not a government blade from the École nationale d'administration (the school for elite, top-tier civil-masters), then your life will be to service "les boys" from that interesting special place.

Macron's grand plan to fix this problem: Jack up gasoline taxes, to fight "climate change", and cut back nuclear-powered electricity production (and make it more costly for French householders and businesses to live)!  What an astonishing and outrageous scam!  This would have driven most North Americans into the streets, also.  Macron and Mohammed bin Salman were overheard comparing notes on social repression strategies, at the recent G20 conference (by the "Hot Mike" folks...)  Interesting snippets were heard...  Remember - Top Bosses of the World - there are no secrets anymore, guys.  We are watching you!

If I were a "Yellow Vest" in Paris, I would direct my actions at the ENA, and avoid setting fires in the streets of Paris.  But French political-economic awareness does not reach the same level as North American understanding, so their problems only compound over time.  (The French think we are stupid, and we *know* they are unwise by just reading their history.  We agree to disagree.  But their wine is good, even if their politics is beyond tragic.)

Here is the Reuters note from the OECD.  Unless you are a rich "ENA Government Worker", life in the European Union (the "People's Republic of Taxtakeistan"!), is just a costly grind, and the grinding gets worse, year-by-year, apparently.

[Dec. 04, 2018] - Some days, I am sure I have fallen down a rabbit-hole.  Isn't it time for a bucket-shop drive?  What about the SOES bandits?  How is Drexel doing?  How's the gang at Lehman Bros?  Does Livermore still get all the green lights as he motors into town?  "Would you have confidence in me Sir, enough to let me borrow your watch, and return it tomorrow here at this same time?"  Today, as I watch the DJIA down 592, before the big day of mourning tomorrow (I'll say..), I am reminded that nothing new ever happens in the markets, or in the politics of this planet.  Oh - there's the ticker. Now down a nice round 600 points.  Not too big a deal, when the index is at 25K+.  But it reminds me of that time back in '87. 

Really, the most valuable tool we have at our disposal, is our own imagination.  Guys, make sure to listen *carefully* to the warnings that your wives and girlfriends give you.  As Hamlet said to his old school-buddy: "There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy."  I'm reading a detailed piece of scholarship (true, cross-referenced, fact-checked, written by a PhD type, etc.) and it offers evidence that nothing is really new.

Folks (and their praxis [:accepted practice or custom]) here on Dirt just do not ever really change. 

Ok, now down 774 on the DJIA (mourning, did you say?) and still falling.  JPM has lost 5.35/shr  (about the entire value of GE shares now), C is down to $6.123/shr (pre-the 10-for-1 consolidation), and things are looking somber. Even grim...  Will be interesting what the MSM makes up today to explain this. The BDS (Bloomberg Disinfo Service), suggests this is due to "trade hopes fade, yield curve flattens".  No mention of it being because Trump and Pompeo are giving too many kneelings to Heeber & the Bed-Sheet Boys, thus causing the World to lose faith in American power.  (Read Roman history, lads.  When the power at the centre crumbles, & you can't fluvius enough denarii to the sagittariorum, problems begin to compound rapidly, especially if the tax-take is down.)

The mkt-pause tomorrow is a good idea.  Folks in the Excited States can reflect on just what they really want.  Do everyone a bit of good.  I think I'll go skate on the Rhine.

[Dec. 03, 2018] - Curious stock market in Canada.  USA is going straight up (DJIA futures up 450+ pre-open, DJIA still up +270 to 25,814).  Canada is flat to down (TSE index up just 9.96 pts at 1:10pm -> basically, no change).  Strange market here.  Oil is only a tiny part of Cdn GDP.  But Canada is either a gathering storm of disaster, or the most aggressive BUY of the 21st century.  My models and AI's all say "go go go", but our markets are like King Arthur in Monty Python, yelling: "Run away!".

Our financials should be 10 to 20% above where they are now trading (just based on their tier-1 capital bases, never mind the near 5% dividend yields and discounted future cash-flow streams...)  But we are priced like the Argentina of North America, curiously.

[Dec. 02, 2018] - Wall Street Journal reports results of CIA research, that shows that it is certain Saudi "Prince" Mohammed bin Salman knew of, and almost certainly ordered, the murder of American-resident Washington-Post journalist, Jamal Khashoggi, at the Saudi Arabia consulate in Istanbul.  The CIA/NSA also intercepted explicit text messages last year (2017), where Bin Salman discussed how “we could possibly lure him outside Saudi Arabia and make arrangements”.  This information is part of a classified CIA assessment, based on the SIGINT (signals intelligence) communications intercept efforts, which were explicitly detailed by the Snowden materials, as well as being common knowledge among anyone in Britain or Canada who has had interest in, or involvement with, computers, telephone-systems, or internet communication technology in the last 50 years.

Signals intelligence and communications decrypt technology is the reason why Hitler's Germany was defeated in less than 7 years.   Virtually all the Nazi codes were broken and read - even the high-level, high-security "tunny" material that Hitler's generals used.

With the rise of the internet, and somewhat more secure communications (based on factoring big numbers, rather than spinning metal disks) massive advancements have still been possible in the "watch everything, exfiltrate everything, exploit everything" approach that is the modern world now.  There are no secrets anymore, but the rule of law remains, does it not?

The great tragedy is that - in an attempt to advance American interests - the current American leaders are failing to act with a sense of justice and integrity that has previously defined American foreign policy. 

Trump does not make America "great", by going on his knees to a Saudi "Prince" who murders journalists.  And Mike Pompeo's comment that US legislators are "caterwauling" about the Khashoggi murder is disturbing.  Pompeo has stressed the importance of "American interests" in determining the correct response to Saudi Arabia.  This is beyond just being sad, as it appears to explicitly condone and support a criminal act, which under US law, is itself a criminal act ("aiding & abetting" - which has a long history in American common-law).

There is concern now, that the US intell-services knew Khashoggi was being targeted by Bin Salman, and that Pompeo (a former head of the CIA) is attempting to downplay-spin the event for various unethical reasons.  If the USA knew what was likely to occur, and did nothing, then all involved are guilty of a criminal act, under American law. 

We cannot be sure of Mike Pompeo's "mens rea" (mental state), but it is clear he is saying that Saudi "murder squads" working for the Saudi "royal" family members to attack and kill journalists, is to be condoned, so as not to upset "American interests".  This is explicit. 

This may not make him a criminal, but it does do great damage to American "soft-power", in that it proves beyond any doubt that America is not what it once was.  What would Washington or Jefferson say here?  

History has also shown that American interests have rarely been successfully advanced in foreign adventures, where the USA has provided explicit military support to violent, non-democratic, totalitarian state-entities that murder their citizens.   Saudi Arabia is not Vietnam, and 2018 is nothing like 1968.  But we all do best, when we study and replicate that which brings successful outcomes, rather than when we emulate the patterns of failure.   Even Trump must know this, and realize the current approach is fraught with a high risk of eventual failure.  Saudi Arabia is bogus, and some of its "royal" types were *directly* responsible for the Sept. 11, 2001 terror attacks on New York and Washington.  Supporting this entity does not and will not make America "great".

Perhaps America could transfer its military bases to Qatar, and seek to engineer a more neutral stance in the Persian Gulf.  The Saudi-supported warfare in Yemen is horrific.  It is almost certainly not to be contained.  The continuing US approach risks becoming an ever-expanding horror-show of failure and compounding violence as things are structured now.  No one will win anything anywhere, if America continues down this path of moral failure.   America needs to demonstrate "greatness" by being good, and not by supporting evil.  It is really that simple, guys.

[Nov. 29, 2018] - "When the going gets weird, the weird turn pro." - Hunter S. Thompson (or so I am told).  Too bad Thompson didn't live to see this curious world.   We have to engineer rapid and sustained economic growth, else we will all choke on our own fumes, or die fighting over food.   Bravo to NASA for the successful INSIGHT robot-spaceship landing on Mars - first pictures look pretty cool.  About time we sent some people there - (like the B-Arc, perhaps?). 

[Nov. 27-28, 2018] - It is a tad sad to see the internet, this magical communications medium, become what it is now - adverts, selling, and political propaganda.  Add also cruel behaviour and nasty deceptions, as well.

Here is Tim Wu's take on why the internet is awful now:

[Nov. 26, 2018] - Memo-to-self: Try to hold the winners.  Took profits in some trades, and any of them would have generated positive deltas.  Oh my.  Hey ho, and up she rises!  (Call me Ishmael, I guess.)  DJIA up 348.  Dead cat bounce?  Not what my AI's said.  So, ah, why not hold the positions that I at least put on...?  Hmm...   It irks me when my machines see the future, and I do the Roman senator thing, and get all sober-second-thought on action.

I have this economic model that suggests that rising real returns on capital correlate with rising interest rates, and rising asset prices.  It's the base-line mechanism of the business cycle, and the taxation of profits has been the primary blocker holding the whole process down.  We may well see a DJIA of 35,000 within 18 months.  

Reading about Akira Kurosawa's life.  Astonishing times he saw.  He saw his big home city - Tokyo - smashed and burnt to bits *TWICE* in his lifetime... the Sept. 1st, 1923 earthquake and fire which destroyed the place (>100,000 dead), and the March 1945 American firebombing (Tokyo was a city of wooden buildings), which again completely destroyed the city (>125,000 dead).  He writes about being a young boy at the time of the 1923 quake and fires, and seeing the Sumidagawa river on the day after, in the burnt destruction of downtown Tokyo, running red with brick-mud - and seeing *hundreds* of bloated corpses floating face-down in the water with their anuses open like big fish-mouths - even babies tethered to their dead mothers looked like this.  And all this was bobbing gently on the river surface.   And then, as a middle-aged man, he saw it all happen again, this time the result of careful American incendiary bombing.

When he made "Rashomon" in 1950, the gate was carefully constructed to look like the recurring standard picture-of-the-world that he simply saw with his own eyes. 

What happened to Tokyo - twice - in the 20th century, makes the 9/11 attacks on the USA back in 2001, look like just a bad day at the office.   The key tragedy in the USA now seems to be a failure of the imagination.   

The thugs who murdered Jamal Khashoggi thought they were acting in secret, as did that evil creature, Prince Mohammed bin Salman.  But we saw *everything* they did - even when they drained Jamal's blood into the sink, to make air transport of his dismembered corpse easier.  Somehow, the death of one honest journalist can fit inside a human imagination, easier than those burnt rivers with hundreds of bloated corpses.  And it is inside our minds, with our imaginations, where events must first be understood.  Only then, can we take aim at the problems, and begin to craft solutions.

[Nov. 25, 2018] - The anniversary of Yukio Mishima's protest, Nov. 25, 1970, where he took his own life.  I often wonder if it changed anything.  Problem with history is that it is always happening, and only the passing of time can show the truth of things.  I managed to find and play with a BCL 102 gen 2, and was concerned its bolt-release was sensitive. (I like all mechanisms to exhibit perfect build quality - why not, eh?).  Also looked at a Derya MK12 - interesting.. a cool turkey gun from Turkey.  And then I found a great Black-Friday/Cyber-Monday price on a T97 gen1.  The happy luxury of choice!   

Tech note from the edge:  Had to swap a good ethernet card in for a bad one in a two-card network box, which routed LAN to bigger net.  Got the cards installed and working, configured correctly, but IP forwarding would not work. Besides enabling kernel packet-forwarding, you have to turn on NAT (network address translation) to get the cards forwarding packets to each other.  If running Linux, and you have device "eth3" connected to the internet, and device "eth2" connected to your LAN, you can use "iptables" to turn on the address translation:  (as root, from command line:)  " iptables -t nat -A POSTROUTING -o eth3 -j MASQUERADE ".  That will let your LAN boxes hanging off device eth2, see the internet, which is plugged into device eth3.
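A minimal sketch of the whole setup, assuming eth2 = LAN and eth3 = internet as above (run as root; the MASQUERADE rule alone won't move packets if kernel forwarding is off, and the two ACCEPT rules only matter if your FORWARD chain policy is DROP):

```shell
# Enable kernel packet forwarding between interfaces
sysctl -w net.ipv4.ip_forward=1

# NAT: rewrite outbound LAN traffic to use eth3's public address
iptables -t nat -A POSTROUTING -o eth3 -j MASQUERADE

# Permit forwarding LAN -> internet, and return traffic internet -> LAN
iptables -A FORWARD -i eth2 -o eth3 -j ACCEPT
iptables -A FORWARD -i eth3 -o eth2 -m state --state RELATED,ESTABLISHED -j ACCEPT
```

Note the sysctl setting does not survive a reboot; put "net.ipv4.ip_forward = 1" in /etc/sysctl.conf to make it permanent.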

[Nov. 22, 2018] - "I'll miss this system here.  The bottom's low and the treble's clear" Listening to Townes Van Zandt tonite.  Figured out how to fix the "dac_read_search" SELinux errors I keep getting on my CentOS 7 HP box.  I'll write it up later.  A run of successful trades... worked pretty good, I am pleased.  Put Pet Shop Boys "Go West" on the big-screen, and the big-stereo... "We'll find our promised land - we'll find a place, where there's so much space..."  Just hilarious.  "Go West!  Life is peaceful there - where there's so much air!"  Love the lyrics to this tune. "Sun in winter time... we'll feel just fine!!"  (And ya know what?  It's true!)  Tomorrow (today later, actually), gonna try to buy a BCL 102, gen. version 2, 'cause we build them here, and they work good, and Black Friday has come north.  "I laughed and shook his hand, and made my way back home..."   (from David Bowie, "The Man who Sold the World".)  Trouble is, the BCL looks like it's sold out, everywhere. <sigh...>

[Nov. 21, 2018] -  The disappointment with Trump is very real.  People who are right-of-centre, and welcomed a strong and decisive US president, are quietly disgusted with his response to the Saudi-ordered Jamal Khashoggi murder.  Khashoggi was a US-resident and a journalist for the Washington Post, and Trump has said his murder is not worth risking the possible jobs created by US arms sales to non-democratic Saudi Arabia.  This is wrong. 

If Trump were a real leader, all he would have to say is the weapons sales are contingent on Mohammad bin Salman being arrested and put on trial for murder.  Perhaps bin Salman might be acquitted due to insufficient evidence - but at least the US could still stand tall on the world stage, having taken a firm stand.  The ball would have been in the Saudis' court, and they would have had to act.  The US-Saudi Arabia alliance could have been preserved with honour and dignity.  And MBS would have been held to some limited degree of accountability.   But to just say, "Ah, whatever..., we need the jobs and the money" stains the United States with something very dirty that will not be washed away.  

Trump's moral authority, and the moral and political authority of the USA, has been seriously degraded, and the entire world is much poorer for this act of political cowardice.  Trump's behaviour here is grotesque.  And worse - he appears to be a puppet.

Does America really want its leaders to operate this way now?   If you have a job, working for Boeing or Grumman building warplanes for the Saudi Arabs, how will you feel?  What if you are in the military, or maybe you work for the NSA or the CIA, and have heard the torture-murder recording of Khashoggi being killed.  Will you retain loyalty to your "Commander-in-Chief"?   My social-economic models predict that this will not be the end of this awful affair.  The USA has taken a wrong turn here, and history suggests a correction will be required.

[Nov. 19, 2018] - Trump is in trouble.  Despite the CIA report that the Saudi "royals" are murderous gangsters, he is defending them, as predicted.  Even Republicans recognize that Trump is owned by the Saudi gang, and that his defense of bin Salman is wrong.  But Trump is a Saudi guy, and it is becoming clear.  Folks in the "intelligence community" fear Trump will now try to fake-up a war with Iran.  IC folks fear a repeat of the Bush fake-up of intel data used to justify the Iraq war - the "weapons-of-mass-destruction" lie.  That's why CIA went public with Khashoggi murder report. 

CIA (and rest of 5-i) knows Mohammad bin Salman ordered the hit on Khashoggi, and that bin Salman is very bad & dangerous.   And they *know* Trump would bury this critical intel, and demand faked intel to allow bin Salman to hide within deniable doubt.  But there is no doubt here.  The Saudi "royals" and their henchmen are toxic evil - the worst actors on the stage - and USA is bootlicking them.   This action does *not* make America "Great".  This action makes America look like a stooge and a wart-covered toad, burrowing in the mud and excrement of a Big Lie.   Trump risks going down in history as the president who covered America with shame.  The other fear in the IC, is that Trump will try to fake-up a war with Iran.  There is a real chance he might try to do this, in conjunction with the Israeli militarists.  But Iran would get Russian assistance.  So we really are close to initializing a scenario which could morph out of control very quickly.  If bin Salman is not removed, Trump might have to be.

This may well trigger a market reaction.  Most folks have no idea how serious the torture-murder of Jamal Khashoggi has become.   Saudi Arabia is not a democratic entity.  Its true face has been seen clearly, and America cannot maintain any moral integrity if it continues in the alliance with such an entity.  America remains a quasi-democratic entity, and it requires its moral integrity to function, despite what some argue.  But if Trump continues to run cover for the *proven* Saudi gangster regime, he risks doing great damage to America, and the ability for America to have dominant influence in the world.

What Mohammad bin Salman did to Jamal Khashoggi is no different than what bin Laden did to New York and Washington with his hijacked aircraft.  Bin Salman will have to be arrested and put on trial, or the Americans will have to remove Trump.  The current political structure is not stable, and some resolution will have to be crafted.

[Nov. 18, 2018] - (check out the "WW3 Scenario" option sub-menu. A roadmap for Middle East peace...after the war.)

[Nov. 17, 2018] - I've been asked: "What is 'information weaponization' (IW) ?".  My definition is: "Information management done to produce an outcome similar to what happens in a military action.  You engineer a victory for your side, and a loss for your adversary."   From  "Propaganda is an example of weaponized information: misleading or biased information of a political nature that is usually spread by governments. ... Weaponized information is one form of social engineering. The presentation of the information may be skillfully crafted to exploit common cognitive biases and errors." 

Exfiltration of private communication is also a method by which information can be weaponized, and this has been well understood since ancient times.  (It is why military agents typically execute spies for treason).  

But the systemic exploitation of cognition errors and cognitive biases is also part of IW now.  If you can make your counterparty believe something untrue, you can explicitly direct his behaviour toward an outcome that provides a victory for you, and a loss for him.  One need not waste time with "Game Theory" abstractions.  You only need to disinform your enemy sufficiently so as to successfully motivate him into making decisions and taking actions which ensure the successful outcome of your strategy. 

Once AI technology is deployed into the domain of modern neural-science and behavioural economics, I now believe that the Elon-Musk-Scenario ("AI, sufficiently advanced, may be used to seriously damage humanity.") might actually be both possible and likely.  If I had access to a complete market-beating AI, would I use the wealth generated for me, in a positive way?  Or would I use it to accumulate power, and begin behaving badly?   

My market AI stuff works annoyingly well.  I am seriously running into a "human problem", where the suggested trades are *very* uncomfortable, and I am not putting them on.  This is a real problem.  I don't like what the AI is telling me to do, so I often don't do it - and then I watch the market trade down to the point indicated, (I don't act), and then the market reverses, and shoots off, and I see that the trade would have made thousands.  I am finding this process deeply unnerving and unpleasant to say the least.    As humans, we are weak, and full of fear, and this makes us *very* easy to manipulate.   Our cognitive apparatus evolved for jungle-survival, and watching the grasslands.  It does not work well at all, in the high-stress, high-cost, high-reward environments of the digital netherworld.  At least mine does not. 

The weakness in most formal systems is visible in the bathroom mirror.  And it is why in the "AI's versus Humans" struggle (which is already very evident in the stock and bond markets), the AI's will win, and win consistently.   This suggests difficult times ahead, in a number of unique and non-predictable ways. (For example: It means professional money managers will be outperformed by an AI.  Not just sometimes, but every time, every year, year-in and year-out.  That job is basically an anachronism now. [My Pocket-Oxford defines anachronism: "An out-of-date thing"] )

But if World War 3 happens, then perhaps we need not worry much about job descriptions.  As a clever guy I knew who has since died, once reminded me: "Libraries will get you through times of no money, better than money will get you through times of no libraries." 

If we are entering the beginning of a "Time of No Libraries", then investment strategies may be measured in survival of one's farmland, and wealth ranked by the number of cartridges one has in storage. 

[Nov. 16, 2018] - From market perspective, news is noise. It's mostly poo-throwing monkeys and old women banging on tin pots.  It just exists to get your attention, and then mis-direct you.  Probably that is the first rule of investing: Mostly, ignore news.   And in this internet age of the "digital hive-mind", almost all inbound organization-generated information is aggressively designed to mis-direct.  It's not just lies and fraud.  It seems to be *all* datasets generated by commercial or government organs.  This may turn out to mean that most "big data" is a dangerous fraud, and that the "cloud" will prove to be a toxic trap.   Perhaps Elon Musk is right.  I see direct evidence of AI technology being deployed.   And it does not look like it is helping us.  It is helping a small few take control more effectively. Democracy is like the devil, in that it is the details.  And the details show it is failing in the USA.  Judges with political views are determining election outcomes. 

And most folks don't realize Trump is doing what he is doing, because he can do little else.  His locus of control is very small, and so he acts where he is able to act.   Being US president is a sucker's job.  There is almost no authority.  One is simply set up as a target for the professional governmentalists to attack. 

If I were Trump, I would have everyone on that "Mueller" strike-team arrested for treason, the Washington Post shut down as a security risk, and Jeff Bezos arrested for tax evasion.  One cannot fix the problems external to one's house, if one is under assault from the inside.  Peace and security begin with loyalty and harmony-of-action.  I don't like the Chinese communists, but they demonstrate they know what they are doing, and are doing it rather well.  America, on the other hand, looks like it's badly broken.

At some point, the toxic sh/it-show of American politics is going to be reflected in the markets.  Either someone will try to take out Trump, or his supporters will act against his enemies in a more direct, effective and co-ordinated manner.  The current situation is deeply unstable, and this instability will manifest.  The intense anger that many people feel will not be ameliorated by faking the election results.  And people rarely bid up prices for paper-securities in anger.   I see lower valuations ahead, and a retreat to defendable positions on all sides.

[Nov. 15, 2018] - American Airlines pilots explicitly state they were unaware of the automatic "force nose-down" (some are calling "auto-death-dive") software in the 737-Max Boeing aircraft, that could activate in the event of air-speed sensor failure, even if the auto-pilot was set to "manual flight" mode.  Pilots - rightly so - very much dislike "automatic" control systems that explicitly prevent a pilot from exercising control over their aircraft.  The fact that such a control system was designed-into the 737-Max, and that then this feature was explicitly hidden from pilot-operators, is disturbing, to say the least.

We may see Boeing share price do an auto-death-dive of its own.  If it can be shown that Boeing hid this information to enhance market-uptake of the 737-Max, then their liability could be significant. Fly-by-wire is bad enough, but "auto-fly-by-bad-software-even-when-you-think-you-have-switched-auto-fly-off" suggests the aircraft design process is broken.

This dangerous trend towards "loss-of-private-control" is again evident here.  The NSA and CIS explicitly corrupt the integrity of our telecommunications system (mass cable-tapping STORMBREW and BLARNEY systems), degrade security of our computers, networks, and cellphones (FAIRVIEW, DROPOUTJEEP, etc.), and mandate "back-doors" which are operational in almost all machine-systems now.  And aircraft designers are inserting *hidden* code (which they know pilots will not like or accept), into their aircraft control systems, to address design issues they know are evident (the 737 has the reputation of having the glide-characteristics of a set of car-keys).  Laminar-flow wings (which are more aero-efficient), also have the nasty characteristic of violently "whip-stalling" if airspeed is too low.  So the designers sneak in a "just push the damn wheel forward if we detect an incipient stall" hack into the flight-computer code.  And Boeing elects *not* to tell pilots about this (as no-one will want this loss-of-control override).

People need to realize just how wrong this modern approach is.  Computers and cellphones should be *fully secure*, and not have any back-doors by design. Our communications grid should not be hacked by spyware to assist spies.  Aircraft flight-computers (which do most of the flying), need to be free of "design-hacks" which degrade and impair pilot control.

Our modern systems are being used to build a world where private, local control of the technology we need and rely upon, is being deliberately corrupted so that centralized control can be asserted and maintained.   Not only does this attack our basic freedom of action as human agents, it can also just kill us.  

[Nov. 14, 2018] - As suspected, there is strong evidence that bad software caused the Lion Air crash in Indonesia.  Pilots are now reporting that the automatic "force-forward" of the control-wheel (to pitch the nose of the 737-max downward), was not documented in the Boeing aircraft flight-training manuals, was not included in pilot training, and when activated in level flight (as the result of false air-speed indication, possibly due to plugged pitot-tubes or other flight sensor failures), could cause the nose to be forced down, and that this apparently could occur even if the auto-pilot was switched to "manual flight" mode..(!)   If this is true, then Boeing is probably in serious trouble as an aircraft manufacturer.

[Google "pitot tube" for explanation of this sensor.  Failures of pitot tubes, which measure air-speed, can result from insects building nests in them.  This is a *very* common problem]

Bad software can kill people.  And as software specialists, we *know* that most software is bad.  That is simply the truth, and results from the nature of the software engineering process that is considered acceptable now.  Beta-level code is pushed into production, and upgrades are made later, with the production platform being used as the test-bed for the code.  This practice was unthinkable in the 1970's & 1980's, but has gained currency with the economic success of companies like Facebook and Google, which deliberately engage in a strategy of constant, aggressive software change.  Facebook actually brags that it writes and deploys into production software that "breaks things".  

But in Boeing's case, the code worked fine - the flaw looks to be in the design itself.  Imagine the pilots, now seeing another example of incorrect airspeed (there had been 4 previous flights with airspeed sensor errors).  They turn back to the airport, and in the turn, the wheel is violently pushed forward by this anti-stall feature. "Are we in 'manual mode'?"  "Yes, we are!"  "WTF!?  The wheel is hard forward!!"  The aircraft is now accelerating towards the ocean, the airspeed still registers 80 kts or some bogus value.  This could happen if all the pitot tubes were full of spider-nests, dead bugs or dirt. Reports indicate the pilots have not had any training on this special automatic "nose down" system, nor is this "feature" documented in the flight manual for the aircraft.  Before they can determine how to disengage this idiotic software, the aircraft hits the water at high speed and explodes into bits.  If this is the scenario that happened, then Boeing is at fault.

[Nov. 13, 2018] - Sometimes, I think the markets have discounted the heat-death of the universe.  GE traded down to 7 3/4, (hatsize - but I have a fat head), and then today popped up to 9.  Could have been a slick trade.  But I am far too far from the action for the fast stuff, and am also having ongoing problems with the wi-max link, as our ISP is overloading the tower we are using.  

[Nov. 12, 2018] - Smell the fear?  Dow Jones Ind. Ave. down 602 today, and N225 in Japan already down roughly 3% as of 9:30 pm New York time.  I am pretty sure this is just the beginning.  There is a wire-service report circulating this evening, saying that a nation-state level attack diverted Google's net services traffic to a China Telecom site, as well as a Russian ISP, Transtelecom.  A Nigerian ISP called MainOne was also involved.  Details are unclear at present, but some suggested it looked like a "cyber-war-games" experiment, which is apparently not uncommon.  The trick is called "border gateway protocol hijacking", and basically involves fiddling the addresses in a big backbone router to vector all traffic for a specific group of IP#s (Google's services, in this case) to a falsely defined router which can then hoover up all the packets.   What made this unique is that it lasted for around two hours.  Google is reported to have said simply: "It was external to us", which is a smart thing to say, because that is probably true. 
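The reason a hijacked announcement "wins" is longest-prefix matching: routers prefer the most specific route covering a destination, so announcing a more-specific prefix vectors the traffic to you. A toy illustration of just that matching rule (not a BGP implementation; the addresses and next-hop names are made up):

```python
import ipaddress

# Toy routing table: prefix -> next hop.  The /24 announcement is the
# "hijack" - more specific than the legitimate /16, so it attracts
# all traffic destined for addresses inside it.
ROUTES = {
    ipaddress.ip_network("203.0.0.0/16"): "legit-router",
    ipaddress.ip_network("203.0.113.0/24"): "hijacker",
}

def next_hop(dst):
    """Longest-prefix match: pick the most specific route containing dst."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

print(next_hop("203.0.113.5"))   # inside both prefixes -> "hijacker"
print(next_hop("203.0.42.9"))    # only inside the /16 -> "legit-router"
```

In the real event, the falsely announced routes would have pulled Google-bound packets toward the announcing networks in exactly this more-specific-wins fashion.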

The internet (and our democratic political system, also..) are built on trust.  Trust is like air.  You don't notice how important it is, until it is removed.  As trust is removed from the system, we may see something like the mouse in the bell-jar, when the vacuum-pump is turned on. It may become difficult to do things.  A world of lies and fraud, dependent on a communications-grid and computer systems which have been purposely compromised, presents an algorithm for a non-linear cascade of loss-of-confidence.    Most folks at the "executive level" in most major orgs, public and private sector, have no clue how far out we are into this curious realm of potential instability.  We are out on a limb that is hundreds of feet longer than the tree is tall. 

Example:  Do you trust Boeing airplanes?  The Lion Air crash in Indonesia appears to have been caused by a software error, where the control wheel is forced-forward by the auto-pilot computer detecting a loss of air-speed.  The control-wheel is automatically pushed forward by the flight computer, to prevent an aerodynamic stall which can be lethal in a big jet.  But if the airspeed sensors are all wrong, this action might also be lethal.  What is the truth?  Media reports are deliberately unclear.  But we do know Boeing has sent an update on this issue to all airlines operating the stretched 737-Max.   You would not fly on one, if you distrust computer-controlled aircraft. Yet the co-pilot who committed suicide by flying the "GermanWings" aircraft into the French Alps, after locking the captain out of the flight-deck, shows a situation where computer-override would have saved the aircraft and all its passengers.   If you don't trust the mentally-ill pilot or the poorly-written aircraft software, then you solve the problem by simply not using commercial aviation services.  Trust is key to the business model of an airline.

What will a world devoid of trust look like?   A smoking ruin, perhaps?

[Nov. 9, 2018] - If you are in the USA, and sell to GE or a related company, or in any way are a supplier to General Electric, you probably want to make sure to get confirmed payment, before you ship anything.  There is the smell of Chapter 11 about them.  

And as for the market in general, one must bear in mind that there is not likely to be any "Trump Jump" in prices this time around.  The tax changes are a done deal, the economy is firing on all cylinders, and the Federal Reserve has made it clear that rates are to be moved up. De-leveraging needs to at least begin, & this will impact equity market values, no way around this fact.  The nanobots and sharp operators will work hard to bull prices (because a lot of folks are going to have difficulty with a declining market-price path), but any further bull movement risks a runaway bubble, which given the money-puffery that has taken place, would be profoundly dangerous, given the instability it could generate in the near term.  As rates rise, there will be sales of fixed-income assets into a falling market (which is what forced selling always looks like), and security prices - bonds and stocks - will most likely be priced lower so that the markets clear. 

A sensible level is 18,000 to 20,000 on the DJIA, with low-risk short rates around 4.00%.  This is not a heroic or extreme forecast.  I would expect this 12 to 18 months from now, yet I understand that lately, I am almost always wrong on time-frames, as events seem to cascade quickly now, at "internet-speed".   Maybe 3 months out might be a better call (Feb. to March 2019 time frame..).     And if low-risk short-dated notes are sporting yields of 4.00%, then low-beta stocks will need dividends in the 6.50% to 7.00% range to compensate investors for the risks they have to accept if they hold stocks.  Stock prices may not come off that much (to make 7 percent dividends), but it would not be awful or extreme if they did.  Oil prices are trending down (for now), but I notice that Starbucks has just pushed thru a 10% price increase on a standard "tall" coffee - from $2.05 to $2.25.  Inflation is everywhere now, but "hedonic adjustments" are keeping it from being reflected in CPI numbers.   (It is not just the "news" that is faked these days...).   Inflation is not good for stock prices.
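The dividend-yield arithmetic above is mechanical: for a fixed dollar payout, the price that delivers a target yield is just dividend divided by yield. A quick sketch with purely hypothetical numbers (these are not the DJIA or Starbucks figures; in practice dividends can also rise, softening the price adjustment):

```python
def price_for_yield(annual_dividend, target_yield):
    """Price at which a fixed dollar dividend delivers the target yield."""
    return annual_dividend / target_yield

# Hypothetical low-beta stock: $2.00 annual dividend, yielding 4% at $50.
current_price = price_for_yield(2.00, 0.04)   # 50.0
repriced = price_for_yield(2.00, 0.07)        # price needed for a 7% yield
drop_pct = (current_price - repriced) / current_price * 100

print(round(repriced, 2))   # 28.57
print(round(drop_pct, 1))   # 42.9 (percent decline, if the dividend is static)
```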

[Nov. 8, 2018] - Suddenly, all the arrows are pointing down.  Curious.  Very negative indicators, restrained sentiment, and no reason to buy anything, anywhere.  The usual f-news dis-info on the wires ('American investors really happy with "gridlock" ', etc...), but there is no catalyst for positive price movement anywhere on the horizon.  Too many economic agents - companies, consumers and governments - have just got too much debt.  Even if rates stay at 2.00%, this debt is a problem.  At 3.00%, the carry-costs will begin to bite, and at higher rates, the erosion of capital value will accelerate.  Our models show difficult times ahead. 

But perhaps the social difficulties are even greater. America trains its fearless young men to be *excellent* killers, and then acts surprised at the social results.  We have similar problems here, but the US scenarios mystify us.  Why would a Marine shoot up a bar full of young women, and murder policemen?  Why does a millionaire gambler shoot up a music festival?  Yes, these are suicides, but they are carefully planned events.  Why not at least use such evil skills for a practical purpose?  America seems to be somehow, socializing a curious format of self-destruction. "Muckers" perhaps?  Are we all at risk of "standing on Zanzibar"?

See, the most disturbing thing, is that this movie and book have already been written.  We are embedded in a future-world now that is profoundly predictable, and disturbingly nasty.  Between Bayesian base-rate awareness, current machine-learning AI-techniques, effective agency monitoring of all communications traffic, and the tragic characteristics of people (who can learn, but cannot change their behaviour), the agents of the powerful (and the cruel), can exercise significant social control.  Perhaps there are those who sense this, and realize that this lack of options and check-mated life of boundaries and limitations, makes life intolerable?  This zeitgeist is deep in literature.  Huck Finn could "light out for the Territories...", but there are no such places now.   Richard Russell steals an airliner, does some wild aerobatic flying, and then "calls it a night".   Is the future of America, collective suicide?  That will be unfortunate for us all, I fear.

[Nov. 7, 2018] - So America had a big election.  Wow, changes everything, right? Or maybe not at all?  The eternal media-contest between the "Demo-Rats" and the "Re-Bubba-Cans" has become a weary, ongoing charade of rhodomontade.  The whole political process in the USA looks bogus.  You get two choices at the polls, and neither is attractive.  The voter in the USA faces not so much a "Hobson's Choice" as a "Hobbes" choice of options that are often nasty, brutish and a bit short on accuracy.  We have the same problem here.  We have these elections, and the "Government" always wins.  Faces change, talk happens, taxes rise.  And then the winter comes.   

[Nov. 5, 2018] - My Japanese friends ask: "What you think about big American Erection"?  I bite my tongue so I don't laugh, but to myself, I think: "Well, somebody's gonna get  ..."  What a world. 

Spent part of weekend fixing my Rails 5.2 internal webserver, which was not executing the "Destroy" link to delete records from the database.  I built gcc 4.8.5 from source (using gcc 4.4), which was pretty interesting.  Once I had gcc 4.8.5 working, I built node.js from source.  Node includes a version of Google's V8 thing, source of which can't be downloaded now.

Shame about Google.  Fortunately, there are archives.  Even after tweaking Gemfiles, running "gem install yatta-yatta" and then "bundle", I still could not get the Rails webserver to serve up some javascript to the browsers.  Turned out I had to "precompile" the "assets" (not just .jpgs, but the javascripts and .css files).  Do this by cd to /web/Websitedir/app/, and you should see dirs: "assets controllers models views" and some other dirs.  Run (from command line interface, if Linux):  "rake assets:precompile".  This is critical, as it creates the javascript files that bring up the little "Are you Sure? Y/N" dialog box when you click on "Destroy" in a standard CRUD app, and lets the "link_to" "delete" method work.  The "precompiled" files each have long hex filenames.  "Rake" is a Rails utility. (Google "Rails Rake" for details). 
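The recovery steps above, condensed into one sequence (the /web/Websitedir path is the example from the note; substitute your own Rails application root, since rake must run from the directory holding the Rakefile):

```shell
cd /web/Websitedir          # Rails application root (example path from above)
bundle install              # install the gems listed in the Gemfile
rake assets:precompile      # compile .js and .css assets into public/assets/
# Restart the Rails server afterwards so it serves the fingerprinted files.
```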

My little web-based data-management thing for news and other stuff, works again.  Happy Dance!

[Nov. 1, 2018] - Happy Day of the Dead! Now don't think I don't have the greatest respect for cryptanalysts - being a fan of both Turing and William Gibson.  Here is URL for the "Cryptographer's Toccata".  GEMESYS Note: Render at high volume:...

[Oct. 31, 2018] - Happy Halloween! The Scary time of year!  Mkt recovering, but fear remains.  Wanna see something *REALLY*  scary?   Check out the Spook Budget = NIP + MIP growth over the last 14 years.

USA  is planning to spend  $81 billion on "Intelligence" activity - military & civilian, in 2019. 

That is some serious money.  The USA could colonize Mars with that much money each year.  It is truly a shocking amount of cash, all lifted from the pockets of USA taxpayers..  To see how the "Spook-Budget" has ballooned since 2005, check out the cash-table shown on the URL below.  Most scary damn thing I have seen in years.  Growth-rates like this in the "non-productive" government sector cannot be supported (i.e. costs doubling every 14 years).  This is a *proven* pathway to economic self-destruction (just ask Victorian England, or Soviet Russia).   If you grow gov't spending like this, you *will with certainty* wreck your economy.  As an economist, let me just say simply:  History shows this process is not sustainable.  One day, once *everyone* hates you, and your military is stretched to the limits, the Rhine will freeze over, and what you have brought to others, will be brought to you. Rome didn't "fall".  It chose to "fall on its sword."
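As a back-of-envelope check on "costs doubling every 14 years": the implied compound growth rate is about 5% per year, and the rule-of-72 shortcut (72/14 ≈ 5.1) agrees:

```python
# Annual compound growth rate implied by a doubling time of 14 years.
doubling_years = 14
rate = 2 ** (1 / doubling_years) - 1

print(f"{rate:.1%} per year")        # ~5.1% per year, compounded
print(f"rule of 72: {72 / doubling_years:.1f}%")  # quick mental check
```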

At the very least, take a spin thru: J M Keynes: Economic Consequences of the Peace, written in 1919. When Keynes (who was on the Peace Commission), saw that UK and France were going to crush the German economy in the "Peace Treaty", he resigned in disgust, and wrote this book.  Even if you just read the first Chapter, take a look.  With "Brexit" looming, Europe destroying the Euro experiment of "sound money", and the USA in "self-destruct" mode, it resonates well with our modern times.

The current US policies in the "Middle East" look to be batsh*t crazy insane, completely designed to create crisis, magnify conflict and destroy opportunity.  You can't spend your way to wealth, cheat your way to honesty, lie your way to truth, or fight your way to peace, anymore than - in the words of the old Vietnam vets - you can fuck your way to virginity.

The USA is rightly critical of mail-bombers and church-shooters.  So why do the spy-guys think these strategies will offer successful outcomes in non-USA situations?  The HUMINT specialists and gov't military analysts don't grasp the logical disconnect here?  Scary times all around.  But hey, the DJIA is up 429 today.  Woo hoo.  Party now.  Die tomorrow.  The future?  IBG/YBG, right?  Live well, and leave a nice looking corpse.

[Oct. 30, 2018] - Scary times... GEMESYS fcst for GE stock price:  [* hat-size *]

[Oct. 29, 2018] - 5:17 am.. Oh hell.  IBM is buying RedHat for $34 billion USD.  I've spent *years* migrating everything from Windows to Fedora & CentOS (the opensource versions of RedHat Linux).  Everything I have built now runs on variants of RedHat Linux.  And now RedHat will be eaten by IBM?  "Upstream" will now be IBM corporate???  Yikes!  My earliest memories as a wee lad are of trying to use 3270 terminals to access a VM/370 & its "minidisks" so I could run my Fortran stats stuff on time-series.  Those 3270's each looked like a giant cannon pointed right at your head.  You don't think to worry about the NSA, when you wake up to find the dripping horse-head of IBM in your bed... Unsure  One way or another, "Upstream" is going to remain an ongoing source of grief, I fear. ("Upstream" being a term that is common & key to both the NSA exfiltrators and RedHat/Linux users.) Sad

Reading some stuff on Traffic-Analysis and old-fashioned cryptography.  The "Index of Coincidence" is really interesting.  Got Turing's B-Park (1941) papers also, from arXiv at Cornell (thx guys..).  I use something called AIQ - actual information quotient - just the quanta of true, useful info divided by total input data-info, as per a Shannon number. (On the internet now, AIQ is almost zero, as 95% to 99% of the info is toxic clickbait green-snot.)

An agent using internet data can make *better* decisions by *reducing* typical internet info stream density - like what F-4 Phantom pilots did in Vietnam war, when they shut off most of their sensor-based warning technology and just used their eyes. (Plus, the beaconing of their radar made them "bright" targets, just like data-vomiting Firefox web-browsers and Apple/Windows O/S's do for us now..) 

Turn off your lights and EM-signatures, shut the hell up, and just pay attention with dialed-up passive observation.  (Be quietly "on the bounce", as the old Army boys used to say..)  That pays off even better when the AIQ is negative (which is what deception does to you).  AIQ can fall below zero, if you are dealing with lies and fraud presented effectively as truth.  Tough world now.  News is not just fake, it is very carefully engineered - see that GCHQ paper on deception - like the "dazzle" paint-jobs on First World War ships (which were actually very effective).

There is some really solid neuro-science that explains exactly why and how this works. (And it explains why I get disoriented and nauseous in most supermarkets...  I would literally get dizzy from the chromatic overload - which was exactly what the marketing experts wanted to happen, it turns out..  It's why humans often feel so much happier and more content in natural settings than in cities.  AIQ becomes highly positive, and you just feel better.)
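For what it's worth, the AIQ idea above can be put into toy-formula form.  This exact weighting is just my illustration, not a rigorous information measure - but it captures the two claims: mostly-clickbait feeds score near zero, and deliberate deception can push the quotient below zero.

```python
# Toy sketch of the "AIQ" (actual information quotient) idea:
# useful info minus deliberate disinformation, over total input volume.
# The weighting is purely illustrative.

def aiq(useful_bits: float, disinfo_bits: float, total_bits: float) -> float:
    """Actual-information quotient for a data stream."""
    if total_bits <= 0:
        raise ValueError("total_bits must be positive")
    return (useful_bits - disinfo_bits) / total_bits

# A mostly-clickbait feed: 2% useful, 5% engineered deception -> negative AIQ.
print(aiq(useful_bits=2, disinfo_bits=5, total_bits=100))   # -0.03
# A quiet walk in the woods: nearly everything you take in is signal.
print(aiq(useful_bits=95, disinfo_bits=0, total_bits=100))  # 0.95
```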

[Oct. 28, 2018] - Must-read GCHQ paper on how magicians use deception, developments in neuro-science, & NLP trickery, and how it can all be used to engage in covert online dis-info/disruption exercises - which is what financial markets are facing currently, and what is underway in the lead-up to the US mid-term elections.

[Oct. 27, 2018] - Coming up on 70 years since "Black Friday" - not the stock market crash - but the day when all Russian encrypted data went dark for the early NSA folks, back in 1948  (the Russians improved their encryption, rendering all military & diplomatic messages unreadable by the Americans). 

Tim Cook's comment about "data weaponization" got me thinking about just how bad it has become.  When Soviet decrypts went dark, the pre-NSA cryptanalysts very wisely switched to "T/A-Fusion" - traffic analysis of *all* communication in Soviet Russia - encrypted and plaintext, combined with detailed economic+geographic indexing and analysis.  This allowed a good picture of Soviet military and economic structure to be viewed, which was valuable when the Russians successfully exploded a test nuclear bomb in 1949, and the Americans needed to know if and when the Soviets might attack them. 

Trying to analyze the world markets and economic activity to locate and exploit investment opportunities is not much different than what the NSA/GCHQ/CSIS folks try to do.  Nowadays, everyone sees everything, yet we are also more in the dark than ever.  The NSA wisely tries to "Collect Everything & Exploit Everything" (see slide above right) - and yet, the wisdom to use the "weaponized" data seems to be declining.  The use of deception is routine, in every context now.  And sadly, it seems bad people are being supported and offered modern weapons, while good folks who are trying to tell the truth are allowed to be set upon by the worst kinds of evil people.

As Khashoggi was being killed, I wonder what his last thoughts were?  He probably thought "Goddamn-it..!  My friends who told me to be careful, and said I was in great danger, were right.  And I waved it away, and said I'd be fine.  What a fool I've been...! "

Hitler was apparently terribly paranoid, and insisted that the advanced teleprinter technology the German High Command used to communicate be enhanced by adding extra rotating disks, to improve device security.   But he was not paranoid enough, as the British were able to decrypt the transmissions from the "Tunny" device, using a machine called "Colossus" - the very first vacuum-tube electronic computer.

Nowadays, in a world of weaponized data, where billion-dollar government agencies are tasked with "Collect-it-all / Exploit-it-all" mandates, we grapple with the issues surrounding the mass loss of privacy.  You cannot alter the political system with fake mailbomb devices, as some poor fool in the USA has tried to. But if it is wrong to try to influence American politics with mail-bombs, why does the US Government think weaponized drones can change political outcomes in foreign places?  A government agency operating with a satellite link and some weaponized drones can drop a high-explosive device right into your bedroom now.  All they need to know is what time you go to bed!  Big Grin  Is this a good idea?  We now have mass-monitoring of all traffic on the internet, but in a world awash in evil agents - many of whom are state-funded operatives - this government weaponization of data *degrades* our collective security.

[Oct. 25, 2018 pm] - Bimbo fights back.  If data is "weaponized", then ya just gotta use yur "robot" to target carefully.  Xerion AI work has provided some real insight into the "October Waltz" we have been having.  Image shows results of dirt-simple SAR trade that netted me $1K(Cdn).  If I had more courage, I would have left the trade on for a few more days, and netted 4 or 5 times this amount.  But now that I am using the AI-tech for real money, figured I would be cautious, and run very small for the first while.  Sometimes, I just withdraw the net cash from the account, and put it on the desk, and touch it with my fingers.  It's all plastic now, with holograms on it.  (We are actually *much* more advanced here, than the Excited States to the south... I need to keep reminding myself of this.  I wish my American cousins could chill out just a bit.  We have our own Conservative Tree-House here too.  It's up in Ottawa, by the river.) Cool
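For readers wondering what a "SAR" rule even is: stop-and-reverse, as in Wilder's Parabolic SAR.  Here is a bare-bones Python sketch of the idea - my own simplified toy, not the actual Xerion/robot-trader logic, and it skips Wilder's prior-bar clamping rules:

```python
def parabolic_sar(highs, lows, af_step=0.02, af_max=0.2):
    """Simplified Wilder Parabolic SAR (stop-and-reverse).

    Returns one SAR value per bar after the first.  The SAR trails the
    trend, accelerating toward the extreme point (ep); when price
    penetrates it, the position flips.  Real implementations also clamp
    the SAR against the prior two bars' ranges - omitted here.
    """
    # Seed: assume we start long, stop at the first low, extreme at first high.
    long, sar, ep, af = True, lows[0], highs[0], af_step
    out = []
    for h, l in zip(highs[1:], lows[1:]):
        sar = sar + af * (ep - sar)          # accelerate toward the extreme
        if long:
            if l < sar:                       # stop hit: flip short
                long, sar, ep, af = False, ep, l, af_step
            elif h > ep:                      # new high: speed up
                ep, af = h, min(af + af_step, af_max)
        else:
            if h > sar:                       # stop hit: flip long
                long, sar, ep, af = True, ep, h, af_step
            elif l < ep:                      # new low: speed up
                ep, af = l, min(af + af_step, af_max)
        out.append(sar)
    return out

# In a steady uptrend, the SAR rises but stays below the lows (no flip):
highs = [float(i + 1) for i in range(10)]
lows = [i + 0.5 for i in range(10)]
print(parabolic_sar(highs, lows))
```

In a trade, you stay long while the SAR sits below price, and reverse when price crosses it - which is roughly what riding an "October Waltz" with a SAR rule means.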

[Oct. 25, 2018] - Tim Cook, the CEO of Apple, has made a very key point at a conference in Europe.  While the USA is involved in a media dance-of-deception leading to the mid-term elections in November, Cook is pointing out the real dangers of "weaponizing" personal data.   His point is valid, and should be given more coverage.

[Oct. 24, 2018] - The Trump administration has revoked the US visas of the Saudi agents who carried out the murder of journalist Jamal Khashoggi in Istanbul.  So, these Saudi "security agents" actually had US visas, and the US knows their names?   What does that tell us?    It seems certain now that there will be a landslide in the November elections for the Democrats.  But that will not likely change anything.  The Saudi money is corrosive and appears to be actively corrupting US political activity.

[Oct. 22, 2018] - If you love your children, or care about the future, or just enjoy freedom, eventually, you become political.  We should not do business with Saudi Arabia.  But others will.  Nothing will change unless quality people make it change.

[Oct. 19, 2018] -  The Trump administration may well be undone by the actions of the murderous House of Saud.  And scenario projections suggest this will bring instability and conflict.  Saudi Arabia lies as a great prize, which may be taken, once the Americans withdraw their support.  And the march of events will likely force this action upon them.  To go down any other path will repudiate all they claim to stand for, and incur political costs that a democratic Government will not wish to bear.

[Oct. 17, 2018]  - It just keeps getting more unbelievable.  The torture-murder of Jamal Khashoggi in the Saudi Arabian Consulate in Istanbul is starting to look much like the murder of the Archduke Franz Ferdinand, and his wife, Duchess Sophie.  That murder, in June 1914, triggered the First World War.

This recent gruesome and horrific killing of Jamal Khashoggi - an American-resident journalist and a personal friend of Recep Erdogan, the democratically elected leader of Turkey (a NATO-member state) - shows the true nature of the Saudis.  The Saudis - and the "royal family" who run the country - pretend to fight terrorism, but it would seem now that the Saudis *ARE* the terrorists.

Osama bin Laden, the terrorist killer who organized and funded the attacks on New York and Washington in 2001, was a Saudi Arabian.  We need to keep this fact in mind.  The US was attacked by Saudi's, but George W. Bush and his associates responded by attacking Iraq, and blaming Iran, nations that had nothing to do with the 2001 attacks.  Why?

The murder-team that was sent to Istanbul included a "doctor", who was involved in the dismemberment of Khashoggi's body.  Khashoggi's fingers were cut off.  Prince Mohammed bin Salman, the "crown prince" who runs Saudi Arabia, is widely believed to have ordered the killing.  The hit team was M.b.S's own personal "security" staff.  The Turkish authorities have explicit images of each of the members of the murder team entering Istanbul, at the airport. They have surveillance recordings of the entire murder being carried out.  The gruesome assassination and dismemberment of Khashoggi's body, was accomplished with the killers wearing headphones, listening to music, to distract themselves from the blood.    Khashoggi was beheaded, in the same manner that the ISIS terrorists use to execute their victims.

The USA is now a net-exporter of oil.   Canada is also a net exporter.  And the Europeans have North Sea oil, and new pipelines that connect them to Russian oil fields, which are both willing and able to export to Europe.  And we have electric cars that work.  I drove one last week.  They work fine.  We can all live fine without Saudi oil.

History looks like it is about to begin again, and with a vengeance.  The current "royal family" regime in Saudi Arabia is finished, as Saudi money is now shown to be dripping in fresh blood.  Who on this planet could seriously consider either travelling there, or doing any business with this murderous regime?

[Oct. 15, 2018] - These are strange times.

I see Microsoft is open-sourcing their 60,000-patent portfolio.  Smart decision and a wise move.  Everything I have now runs on Linux.  CentOS-7.4 is reasonably stable, and the market/AI/database production stuff runs on it.  The Market-AI stuff sort-of works, but not well enough to put much real money on.  It's the older, simpler, almost-automated "back-of-the-envelope" stuff that consistently makes money, and provides trading direction.

I recently made a massive shift in the portfolios, and it looks like it might work out, despite the risks.  It is a *very* crazy time now.  Yes, the markets might implode, but there is also a risk that they might run away on the upside.  We all could actually live *without* Saudi Arabia oil.  The murder of Jamal Khashoggi in Istanbul, in the Saudi Embassy, shows that we should not do business with these people, at least not until the person who ordered this killing is arrested and charged with murder. 

[Oct. 11, 2018] - My research suggests that the SuperMicro hacked-boards were more likely hacked by CIA In-Q-Tel types rather than the Chinese.  The idea was to drop these exfiltrating server boards in China, and poll them to monitor Chinese monitoring activity.  The whole Bloomberg story looks like it might be a CIA dis-info campaign to sanitize the entire screwup, since the hacked-boards were discovered.  Just because the boards were made in China means little - because almost ALL hardware (> 93%) is made in China.  The problem was that the boards got dropped into a bunch of US companies in error, and that put the CIA/NSA boys in *direct* violation of US law - ie. wiretapping domestic entities without a court order.  This is only scenario-construction on my part. I have zero proof of this, since real IC types do *not* talk about their work (Official Secrets Act, Oath to the Queen, etc.)  The very fact that 17 spies talked to Bloomberg reporters in the USA basically proves there is rotting fish lying about.  Intelligence officers - current and retired - just do not talk about their work.  If many are hyper-chatting to Bloomberg reporters, it is almost certain that they are painting a false picture.  (I told a guy that if the SuperMicro hardware-hack story was true, it could take the DJIA down 10,000 points. So far, we are down 1,600, which is a pretty good start.  Oh my.)  But I believe Apple and Amazon when they assert that the story as reported by Bloomberg simply did not happen.  Meanwhile, we can all swim out past the breakers...

[Oct. 10, 2018] - Lovely weather, and a big 830-point down-day on the DJIA. Oh my.  But I got Firefox 60.2.1 installed and configured on a CentOS 6.6 box, which I use for video experiments.  Firefox 60 does HTML5 video, and bundles libavcodec with the browser, so the grief-factor in getting streaming video to work is lowered.  But bookmarks from previous versions of Firefox may not import, and in my case (and for others also), attempts to import plain .html files of bookmark data will cause Firefox 60.2.1 ESR (Extended Support Release) to crash completely.  I developed a workaround, using the ".jsonlz4" binary files that Firefox keeps in a bookmarkbackups directory.  (See the "Firefox+Video HowTo" section on this site).  Hope it helps! Cool
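For anyone wanting to script against those .jsonlz4 backups: they use Mozilla's non-standard "mozLz4" container - an 8-byte magic string, a 4-byte little-endian uncompressed size, then a raw LZ4 block.  Here is a minimal header parser, stdlib only; actually decompressing the payload needs the third-party lz4 module (lz4.block), which I am assuming here rather than demonstrating:

```python
import struct

MAGIC = b"mozLz40\0"  # Mozilla's mozLz4 container magic (8 bytes)

def read_mozlz4_header(data: bytes):
    """Return (decompressed_size, payload) from a .jsonlz4 buffer.

    The payload is a raw LZ4 block; to decompress it you need the
    third-party 'lz4' module (lz4.block.decompress), not the stdlib.
    """
    if not data.startswith(MAGIC):
        raise ValueError("not a mozLz4 file")
    (size,) = struct.unpack_from("<I", data, len(MAGIC))
    return size, data[len(MAGIC) + 4:]

# Example with a fabricated buffer (not a real bookmark file):
fake = MAGIC + struct.pack("<I", 1234) + b"\x00raw-lz4-block"
size, payload = read_mozlz4_header(fake)
print(size)  # 1234
```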

[Oct. 4, 2018] - Turns out there is no record-level locking in SQLite3.  A database without record-locking is a bit silly. If you use the SQL commands BEGIN TRANSACTION and then COMMIT, you have to lock the whole database, which - given that Rails and modern browsers support the PATCH method of posting - means SQLite is a poor choice, even for small, researchy applications. You need Postgres or something like that.   But I've hacked an approach together which allows effective page locking & sharing among cooperating users.   Oh, and Google removed all my apps from the Playstore because I didn't check a box saying they aren't for children. Oh my...  Logged into the developer thing, clicked the check-boxes, and re-submitted.  I think they are all back.  Someone search "GEMESYS" on the Playstore and let me know, please..
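To be fair to SQLite, you do get safe concurrent writes - just always at whole-database granularity, never per-record, which is the complaint.  A small stdlib sqlite3 sketch (throwaway temp-file paths; BEGIN IMMEDIATE takes the write lock up front, and the connect timeout makes other writers wait instead of failing):

```python
import os, sqlite3, tempfile

# Demo of SQLite's locking granularity: the lock covers the whole
# database file, never an individual record.
db = os.path.join(tempfile.mkdtemp(), "demo.db")

conn = sqlite3.connect(db, timeout=5.0, isolation_level=None)  # manual txns
conn.execute("CREATE TABLE quotes (sym TEXT PRIMARY KEY, px REAL)")

# BEGIN IMMEDIATE grabs the write lock immediately; any other
# connection's writes now block (up to their timeout) until COMMIT.
conn.execute("BEGIN IMMEDIATE")
conn.execute("INSERT OR REPLACE INTO quotes VALUES ('GE', 10.18)")
conn.execute("COMMIT")

print(conn.execute("SELECT px FROM quotes WHERE sym='GE'").fetchone()[0])  # 10.18
```

Fine for a single writer with cooperating readers; for many simultaneous PATCH-style updaters, row-level locking (Postgres) really is the right tool.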

[Oct. 3, 2018] - Example pic-of-the-day shows using Firefox 52.2 ESR (the default bundled with CentOS-7.4) in "fullscreen" mode with "menu-bars" turned off, and colours adjusted to dark-background values.  You can make an almost acceptable quote-screen, using Yahoo Finance.  (Verizon has not completely destroyed it, though they are trying..)  Linux gained its first serious production acceptance in the harsh world of finance and trading, where MS-Windows (think the BSOD) and Apple's toys for leftist-journalists were viewed as consumer-grade fluffcrap.  Linux/Unix made the telephone system work, and now it makes the internet work (sort of..).  My Rails 5.2 server is running nicely/proving useful - got it working in SSL mode (w. self-signed certs), but it remains unclear how to install sane record-locking so multiple simultaneous updates on DB frames can be done correctly. <sigh..>

[Oct. 2, 2018] - Read the Kepler paper on how they used TensorFlow to classify exoplanet candidates based on the light-curve data from the Kepler star survey, and a training-set of human-astronomer-identified exoplanets and false positives (generated, for example, by eclipsing binary stars). A convolutional neural network is trained using this training set, and then that network is run against the full Kepler data to search for additional exoplanets, identified by v-shaped downspikes in the light-curve data - which are termed "TCEs" - threshold crossing events.   Running their trained network against the full population of Kepler light-curve observations, the researchers found a bunch more candidate exoplanets.  It is a neat bit of work.  But it is also a bit of a "bottle of smoke", since their training data (which labels "actual exoplanets vs. false positives") is only an opinion-base of researcher assertions - we don't know for sure whether the light-curve down-spikes really are exoplanets, and we don't know for sure whether their "false positives" really are false positives.  But still, the technique is clever, since you can perhaps teach the machine to work at least as well as the best judgement of qualified experts.  And let's be clear - the *really* interesting exoplanets - the Earth-sized ones - make *very* small TCE down-deltas when they are orbiting at a life-sustaining distance from their small, Sun-sized stars.    I am hoping I can get more detail on how these guys actually built their CNN (convolutional neural network).  I'd like to know, in detail, how they built the TensorFlow data input pipeline.  The Kepler paper is here:

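You can get a feel for the TCE idea without any neural network at all.  This toy detector (pure Python, my own crude simplification - the real pipeline detrends and phase-folds the light curve first) just flags runs where the normalized flux dips below a threshold:

```python
def find_tces(flux, threshold=0.99):
    """Return (start, end) index pairs where normalized flux dips below
    `threshold` - a crude stand-in for Kepler's threshold crossing
    events.  Real pipelines detrend and phase-fold before this step."""
    events, start = [], None
    for i, f in enumerate(flux):
        if f < threshold and start is None:
            start = i                        # dip begins
        elif f >= threshold and start is not None:
            events.append((start, i - 1))    # dip ends
            start = None
    if start is not None:                    # dip runs to end of data
        events.append((start, len(flux) - 1))
    return events

# Flat light curve with one transit-like v-shaped dip:
curve = [1.0] * 10 + [0.995, 0.98, 0.97, 0.98, 0.995] + [1.0] * 10
print(find_tces(curve))  # [(11, 13)]
```

Note how the shallow 0.995 shoulders of the dip don't cross the 0.99 threshold - exactly why Earth-sized planets, with their tiny down-deltas, are so hard to flag this way, and why the CNN approach is attractive.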
Also got MPlayer built and working on the new box. A thing of note:  all the docs on the net talk about /dev/dvd as the DVD device.  But on a CentOS-7 box, the DVD device is /dev/sr0.  So, given that you have compiled and built MPlayer with libdvdread and libdvdnav, as well as the other little gem you need, you can then play a DVD full-screen:

    mplayer -fs dvdnav:// -mouse-movements  -dvd-device /dev/sr0

Note: Your DVD device might be: /dev/dvd.   I had *no* /dev/dvd on my system, with the new Linux kernel (4.18.11).

[Oct. 1, 2018] - I just realized I am in direct competition with Microsoft. Blush  I am building a research-box that runs on Linux, no virtualization, and has fast Windows emulation.  Seems to work, once I got a new drive.  I bought a WD 1Tb NAS-grade unit (colourcode red = 5400 rpm, instead of the colourcode-blue WD drives, which are 7200 rpm and cheaper, but fail sooner).  NAS-grade drives (for Network-Attached Storage devices that run 24/7, 365 days/yr) are slower and more expensive, but have longer MTBF. Paid the extra 20 bucks at Yoshiwara's.  Installed it in the box, ran dps-scan from the new BIOS (ran for 2 1/2 hrs), and installed CentOS-7.4, which has kernel 3.10. I used the ELRepo site to download & install the latest "mainline-stable" kernel, Linux 4.18.11.  And started all over from scratch to build the perfect *production* box.  Got it roughly done (equivalent to the top screen, from the experimental box) at about 5:00am this morning.  WINE built and runs well.  Built Lynx and the tls stuff, libzmq, Python, MPlayer, DOSemu, yasm, slang, and a little basic interpreter I forked from github and am building for a tiny SBC.  Ran, got PIP, and downloaded the Python modules needed to do math and make images.  Also installed LibreOffice this time, and hacked around with "Calc" and "Writer".  Some tailoring was needed, but I built a work-grade document (mixed text and hand-drawn graphics), and migrated some financial spreadsheet stuff from a Windows box, and confirmed it all works. Amazing.

The resulting box and environment just kicks, and beats out Windows10 and Apple <insert-mountain-name-here> OS/X style machines, both of which I own & sometimes have to use (and fight with).  What surprised me was how well LibreOffice works.  It's a tad complex (like Krita, for example), but it offers operational advantage.  I learned Linux CUPS and installed and then removed drivers for HPCP1215 laser printer (which only works on a Windows-8 box, because of HP software trickery, which disables black-ink printing if a colour toner cartridge becomes empty.)  

Excellent site for those struggling with the "tar-pit of complexity" that RedHat stuff seems to have become. I built my first RedHat webserver back in the 1990's (but got shutdown by the OSC's rules re. non-stockbrokers giving financial advice).  Running a production RHEL server now must be a challenge.  The site below documents some of the grief folks now face.  Everyone now needs to be a wizard.

Spent much of the day reading stuff about Linux history and current grief-points (like the systemd garbolfoonnery of complexity - so a giant bag of *many* coded binary bags is better than init-scripts that a human being can read and change (& fix)?  Sure about that?  Looks like an "Employment Security Act" for RedHat employees...)  Great site, lots of info.   And the "Tarpit of RedHat" page link below is actually very useful - it has a section downpage which details steps one can take to harden a production RedHat server.  CentOS-7.4 (and I suppose RHEL-7 also?) has this brown-storm of running daemons, which makes a "ps aux" about 5 pages long.  Even a poly-thumbed hammer-hacker like me can tell it's just a bit too much.  Dr. Bezroukov's site is a treasure-trove of useful information.

[Sept. 29, 2018] - Downloaded Linux kernel 4.18 (latest stable, from ELRepo), and the latest stable build of WINE (3.16) - deployed the kernel, and ./configured and built/installed WINE successfully from source - about 3 or 4 times, in various ways.  In *all* cases, WINE (and even "winecfg" in Xwindows) returns "bus error (core dumped)", which is the machine trying to address something out in null-space where nothing can be found. In my case, hacking it down, it looked to be a bad disk. (Update: it was.)  Flashed a new BIOS (latest from HP, circa 2015) into the HPdc7900.  Ran the 'dps' disk-check/verify (from the BIOS - before Linux even loads) - and dps reports error "7", which translates as a read-error.  Dps says very clearly: "Your disk is bitched.  Go buy a new one."  (I did.)  The disk file-system is "xfs", the famous Silicon Graphics f/s, which is pretty good.  Booted Linux "Rescue", unmounted /dev/sda1 and ran "xfs_repair", and it reported all was ok. (It wasn't.)  And Firefox, Jupyter and a bunch of other stuff all run *fine*.  But WINE died screaming every single time.  WINE - and everything else - runs fine on the experimental box (an almost exact copy of the HPdc7900), except it is an HP-Compaq PRO 6200, with a 4-core Intel processor (the HPdc7900 is a 2-core 3.00ghz Intel cpu).  The first image at top is from the 6200, where everything runs now (Linux kernel is 4.14, SELinux policy file updates done to fix the DAC error blizzard, and setroubleshoot problems).   But the "bus error (core dumped)" thing is worse than a "segmentation fault", and looks to be because of a hard-disk read-error. (It was.)  Even building all of WINE on a USB stick, and running it from there, generated this nasty awful error.  So, looks like I fire up the F-150, drive into the Emerald City, and buy a new 1-TB drive (smallest they sell now, at Yoshiwara's Computer Emporium).  ($79.99, WD-Red-NAS)

Oh, yes.  Does your Linux fling "TPM kacks, cannot do the PCR thing..." errors at boot?  Wanna know *exactly* what this means?  Found this good read from Univ. of Eindhoven in Euroland...  "Da nasty scales uh ignorance done dropped from my eyes, and da good lite uh knowledge did finally shine thru da glass darkly..."  (I think the Cheech Wizard said that..)  The solution to the TPM (Trusted Platform Module) fail, is to enable TPM in the BIOS.  

[Sept. 27, 2018] - Bit of an exercise, but got the new HPDC box running 64-bit CentOS 7.4 (but with the original 3.10 kernel, not the 4.15 kernel on the experimental box).  Built Lynx, DOSemu, various security libs, Python 2.7 and slang from source, made pip, and installed about 20 modules into Python using pip, including Jupyter and IPython.  Still need to build WINE ("Wine Is Not an Emulator" - the recursive acronym is accurate, it isn't one), but it looks doable.  There is a funny bug where Firefox 52.2 (ESR) reports it is *not* the default browser, but if you have "firefox.desktop" in the user Desktop dir, it all works ok (ie. Jupyter hands the keys to the browser correctly, and you can log into the Jupyter in-session server seamlessly).  (Tried a bunch of xdg-settings fiddles, but no effect.)

So, back to where I was in Jan. of this year. I want to build TensorFlow from source, import it into Python, and replicate what is running on the experimental CentOS74 box.  The idea is then to use a modern codebase (in 64-bit-land) to replicate the APL stats & simulator/robot-trader tools and the AI thing I have built with Xerion, which runs in 32-bit-land.  The Jupyter/Python stuff works ok - the example shows the 1-million-randoms histogram, generated and displayed in real-time in a Firefox browser window (right), using a small interactive Python program and the in-session webserver started with "jupyter notebook", running in the left (black xterm) window.  A rather kinky approach, but it seems to work.

[Sept. 26, 2018] - Find I am using Linux for most work now, and I miss the old 32-bit NSYS (CentOS-6.6) box that died, with its ancient IDE Maxtor.  I used it for real work.  The video-research box is in the living room (guess why..), so in the office/lab, I have to do another build.  This time, a development-focused CentOS-7.4 box, but I will try KDE instead of Gnome for the desktop.  Running 2605 packages onto a Seagate 500gb drive.  (The CentOS-7.4 image above shows the Gnome 3 desktop running financial/economic prgms, with a current kernel.)  This is all old, state-of-the-shelf technology, but I have requirements for backward compatibility with old DOS and Windows stuff.  Note: I had a curiously difficult time telling the CentOS 7.4 installer to just wipe the old hfs disk partitions from the Seagate drive.

[Sept. 25, 2018] - Ethernet card in the gateway box went wonky.  Wiggle the RJ45 plug, it would work (and the LAN would lite up...); let go, and the lights (and data!) would go out.  Hilarious. It took *hours* to track it down...  Replaced the bad card, which had the loose socket, with a "new" (old) Intel card.  LAN runs faster.  Did another kinky trick.  Grabbed the Cydia "Samba" stuff (2 daemons and some monitoring/setup progs), and installed it to the old iPad.  Had to run "ldid -S <prgm-name>" on each of the installed Samba binaries, so they would run.  (ldid -S lets unsigned executables run on the jailbroken iPad.)  I use the iPad as a flat-panel Linux box.  But as a file-server for music and videos?  (It actually works not bad, if you have a local WiFi that is quick.)

Have not found any decent open-source (Cydia) CIFS client software for the iPad, but the "smbd" and "nmbd" daemons work, and I can mount a Samba share (basically the whole iPad filesystem as read-only access) from any Linux box.  So a big directory of videos on the iPad can be served up to any Linux device with "mount -t cifs -o user=root -r //<ipad-ip>/<share> IPAD", assuming you have created the IPAD mount point, and <ipad-ip> is your iPad's static IPv4 value.  You edit the "/etc/smb.conf" file on the jailbroken iPad to set the share-names and their access restrictions.  What is comical is how well the old 2010-vintage iPad (First Generation) works as a file server.  On the iPad, you can start a terminal session, and enter "smbstatus", and you get a nice table of who is logged into the iPad, and what files they have locked for viewing.

[Sept. 22, 2018] - Gotta refactor this site soon - getting slow...  Marathon backups got most stuff slotted onto the little Dlink RAID-box. Tired of tech for a while.  Spent last 24hr biking & reading Nihon fiction - Haruki Murakami's "Hear the Wind Sing" & "Pinball".  I really liked Pinball.  I've read most of Murakami's stuff, back around 2K+1-4, in T.  Weird time it was - busy/not-busy, 9/11, farm deal, folks died, gf's went u/s... then teaching, then radiance-tech project c/w lotsa thermal neutrons and BTI dosimeters registering success and trace T3.  NETTGLO, eh?  As one grows, fantasy morphs to memory, it seems.  Picked up a new nickel-plated Norinco NP-29 few months back - price was too good to pass up.  Had to replace a clocking extractor, but now she runs smooth as a Pinball Wizard.  Perhaps a use-case for the Chinese USB sticks presents itself here...?  (I *really* like the Nickel-Norc.  Here's a pic. of her on the desk, at right..  Some China stuff is really very fine.  They are getting better.  Strength thru the Joy of Wealth, I guess.)

[Sept. 21, 2018] - Can't trust the little USB sticks.  With big files, they fail "diff" tests on created files that are > 1gb, and they fail *randomly*.  (Two different copy actions result in two different MD5 hashes - each different from the source!) Oh my.  Like the way MacBooks do floating-point.  Most of the time, it sorta-works.  But maybe not 100%...  Guess that is good enough for folks now, eh?  I had an expensive quartz-tuning Technics receiver go U/S completely this AM - dead as a road-killed toad.  Pulled a 1970's-era "Ravel" pure-analog receiver out of the closet, complete with its *wooden* case, wired it up and fired it up.  Works fine.  Nice sound. Big transistors, wide as a thumbnail, held down with screws.  I know I'm sounding like an old guy here, but our modern tech is not doing the job, is it?

Also dug up a new-in-box Dlink DNS-323 Samba-box, and a couple of shrink-wrapped, matched 1TB Seagate drives - slid 'em in and config'ed a two-disk RAID box (mirrored, not striped).  Its little blue LED glows and flickers happily as I dump bytes to it.  To mount it with modern CentOS 7.4, you might have to tweak the version of the server-message-block protocol to see it.  But after a bit of fiddling, I have all the machines seeing it.  It tests out 100%.  I checked it by copying a bunch of big .tar files and then running diff on source vs. target, and everything reported "identical".

[Sept. 20, 2018] - Another WTF-moment in my Adventures-in-Dumperland: a big Android-NDK tar-file failed its "diff" check after a backup copy.   (My first computer, when I was a tiny child, was a DECsystem 20/20, and we had to make backup-tapes using a tool called "Dumper", DEC's version of "tar".)  I copied all my Android stuff to a 32gb USB stick.  The big app files (the .apk's and such) all checked out ok.  So, then I started a cp session to copy all the *.tar files to the little stick, and we went to Costco to get some steak, Brie cheese, gyoza, and plutonium. (Just kidding on that last one.)   The first .tar (a biggish ndk-linux for x86..) checked ok with diff, but the 1.2 gb Android SDK for Linux failed the diff test ("... files differ."), and generated completely different MD5 hash values.  Oh my!  I unmounted the stick, and ran "e2fsck /dev/sdb1" (all this work is on the new(er) machine, which sees the USB stick as "sdb", rather than "sdc" as on the borked NSYS box, which has two HD's).  The USB's ext2 filesystem reports clean.  But the copied SDK binary file differs from the sourcefile, as do several others.   Everything *looks* ok, and no errors are reported during the copy.  So basically, I need a "copy with verify".  And the USB sticks really are NFG.  I will have to build a RAID-box, and run it as network storage for backup use.  Got a NIB Dlink 323 box, and a couple of 1TB Seagates, still in shrinkwrap.  Will set that stuff up as a 2-disk RAID.  If I can get my own PGP encrypter prgms running, I can maybe use AWS/cloud backup also.
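A minimal "copy with verify" is easy enough to script.  This sketch (hypothetical helper names, stdlib only - shutil for the copy, hashlib for the check) re-reads both ends after copying and compares MD5 hashes:

```python
import hashlib, shutil

def md5_of(path, chunk=1 << 20):
    """MD5 hex digest of a file, read in 1 MB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def copy_verified(src, dst):
    """Copy src to dst, then re-read both files and compare hashes.
    Raises IOError if the copy does not match the source."""
    shutil.copyfile(src, dst)
    if md5_of(src) != md5_of(dst):
        raise IOError("verify failed: %s -> %s" % (src, dst))
    return dst
```

Run it over a directory of .tar files and a flaky USB stick announces itself immediately, instead of during a restore.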

[Sept. 19, 2018] - After a bit of flailing around, I determined a working protocol for copying to USB sticks:  Used "fdisk /dev/sdc" to remove the existing vfat32 partition, wrote a Linux (type 83) ext2 partition (no journalling) as primary partition #1, started it at cyl 16, and used the default ending cyl.  Wrote the new partition, and exited fdisk.  Then used "mke2fs -t ext2 /dev/sdc1" to format the stick.  After the format completes, it is critical to run "partprobe /dev/sdc" to make the Linux kernel aware of the new partition.  Remove and re-insert the stick.  Then: "mount -t ext2 -o sync /dev/sdc1 /SDISK", which mounts the USB stick with the write-cache disabled.  This has worked.  Since I am not rebooting the machine, the "partprobe /dev/sdc" step was critical.  I had to find that program in /NSYS/sbin, as it was not in the Linux rescue image's sbin.  Many previous attempts resulted in scrambled file-systems, which appeared to be OK, but failed when "diff" or "md5sum" was used to verify the files.  If "partprobe" is not correctly run, the kernel will merrily use what it thinks is the disk "geometry" and block tables, and you will get results that may look like they worked, but your data can be wildly wrong, and the directories corrupted.  Make sure to run "e2fsck /dev/sdc1", and *also* check that large files are being copied correctly, with either diff or a hash calculator (I used "md5sum").  I transferred the source tree for ImageMagick from NSYS, and built it on the new machine.  It ran and installed OK, so I am confident that the USB stick now works.  I've also migrated a bunch of *.tar.gz files - including Python, which I built from source (and enabled openssl by removing the comment-out "#"'s in its Makefile).  SSL has to be enabled for the "" Python prgm to be run to bootstrap "pip", which you need to get numpy, scipy, scikit, matplotlib and jupyter.
The "pip install jupyter" ran to completion, and I was able to draw a gaussian histogram of 1 million random numbers from within Jupyter Notebook + Firefox browser, so the new box seems to be working.
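The working protocol above, collected into one sequence (a sketch, not a script to run blind: /dev/sdc and /SDISK are the names from this log, and step 1 wipes the stick - confirm the device with dmesg first):

```shell
# Run as root.  DESTRUCTIVE: repartitions and reformats /dev/sdc.
# 1. In "fdisk /dev/sdc": delete the vfat partition, create one
#    primary Linux (type 83) partition, write it out with "w".
# 2. Format as ext2 (no journal):
mke2fs -t ext2 /dev/sdc1
# 3. Critical: make the kernel re-read the new partition table:
partprobe /dev/sdc
# 4. Remove and re-insert the stick, then mount with write-cache off:
mount -t ext2 -o sync /dev/sdc1 /SDISK
# 5. Copy, verify, and check the filesystem:
cp -v -p -r /HOME/mydata /SDISK/
md5sum /HOME/mydata/bigfile /SDISK/mydata/bigfile   # hashes must match
umount /SDISK
e2fsck /dev/sdc1
```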

[Sept. 18, 2018] - The problem here is I cannot reboot the broken NSYS box (may lose the disk), and the partition changes I make to the USB stick are not being seen by the kernel.  Tried partprobe, hdparm and blockdev.  CentOS/RHAT recommends "partx" (will try when the current copy experiment ends).  Memo: the little NAND USB sticks are crap technology, I fear.  To get them to work, you mount with the "-o sync" option.  A 500 MB file copy can take 40 minutes.  This is a bit silly.

[Sept. 17, 2018 PM] - Determined to crack this, as the USB stick is the only thing I can get mounted on the NSYS box.  Four tries.  This time, used only "parted" (partition editor), and wiped the partition and redefined it as ext2 from the parted prgm.  Ran e2fsck to confirm the filesystem is clean.  Been able to put all /HOME directories on the first 32 GB stick OK, except /home/android_dev, which has the android-sdk-linux, and many large (> 1 GB) system image files.  Looks like it's working OK this time.  I've created on the USB stick all files from /home except android_dev - that copy is now in progress.  Was able to migrate the Rails server stuff using a WD Book-drive (but it's only 500 meg), but lots of /usr/local/src stuff remains that has not been copied from the /root logical drive.  The complexity of modern software is high.  But I don't think many people are getting more value for the time-costs that are required.  The dollar-costs are low, but the time that is destroyed banging on a problem that was solved 50 years ago (ie. systemic backups of critical systems) seems high. (Lots of .java files...)

[Sept. 17, 2018 AM] - Curious.  The backup to the USB stick looked fine - except for the stuff deep in system-images in android-sdk-linux.  Directory "system-images" was empty (!)  When I tried to manually create files in it, all I got was "input/output error".  Running "e2fsck /dev/sdc1" produces thousands of inode errors.  Back to square one.  Curious that this is so very difficult.  Looks like the journalling feature (which makes an ext2 into an ext3 filesystem) damages the USB's operation.  Unmounted, and running "e2fsck /dev/sdc1 -y" (fixing the dtime and clearing the compression bit on about 1 million inodes).  The fsck looks to have wiped out the ext3 journal, but the stick can be mounted as ext2, and all the files appear to be there and readable.  Spot-checks showed most OK.  Now running the long e2fsck.  

[Update:] - More research.  Looks like using ext2 on USB for Linux partitions is the way to go.  But you must ensure you "umount" right.  Confirmed I have a single, correct Linux (type 83) partition table on the USB.  Reformatted the stick with "mke2fs /dev/sdc1" (no '-j' option).  Then mounted, copied some dirs, umounted, and checked the stick with "e2fsck /dev/sdc1", and it reports clean.  Doing this sequence for a few directories at a time.  Confirmed my Panda/sim stuff (a complete DECsystem-20 emulator, running TOPS-20, with a 500 MB RP07 disk image) copied over fine; ran "diff" to confirm the USB-stick and rescue-recovered /HOME dir versions are identical.  Also, various .android device images appear to be copying successfully. 

[Sept. 16, 2018] - Backups.  Using USB sticks.  In Linux.  Do this:  Stick the USB stick in the USB port on your Linux box (my NSYS box, running a rescue ver. of CentOS 6.6 from the CD).  The 32 GB USB 2.0 stick I bought indicated compatibility with Linux 2.6.31 and up.  On CentOS 6.6, running "uname -a" reports kernel version 2.6.32-504, so we are good.

I've got /dev/mapper/vg_nsys-lv-home mounted as /HOME and the root logical volume mounted as /NSYS using Rescue Linux booted from CD.  I wiped the FAT32 partition table from the USB stick as follows, to create a new Linux partition.  From the Bash prompt on rescue Linux, running as root:  Run "dmesg" to confirm the USB stick is seen by Linux.  Mine was /dev/sdc.  So: "fdisk /dev/sdc".  Then, c to toggle DOS-partition-mode off, n for new partition, select type=Linux (number 83), and choose 1 partition, with defaults (start cyl: 2, end cyl: 29000 were the defaults for a 32 GB USB stick).  Use the w cmd to write the partition table to the stick.

Then, make (ie. format) the ext3 filesystem.  (I used ext3 for backwards compatibility.)  "mke2fs -j /dev/sdc1".  The -j means use the ext3 journal stuff.  Label the USB stick with "e2label /dev/sdc1 MY_LABEL".  Run "blkid" and "df" to confirm.  Then, "mount /dev/sdc1 /SDISK", where SDISK is the mount-point directory for the stick.  Then, cd to the logical volume you have mounted that has your needed files, and copy it all to the stick.  My stuff was in /HOME, so cd to that structure, and: "cp -v -p -r * /SDISK".  I have about 27 gigabytes of stuff (the Android dev environment is big, for example, as it has device images), so the copy will run for *hours*, but should fit on the stick.  (My first attempt failed, as I had not created a proper Linux partition table on the USB stick.)  When it completes, check some big files with "diff /HOME/bigfile /SDISK/bigfile -s", which will report the files identical if they actually are.  Maybe confirm you can mount and get files from the stick also.  

[Update/Note: The above appeared to work fine, but some directories were missing.  And running "e2fsck /dev/sdc1" (after umount of the USB stick) produced a blizzard of bad inodes.]

[Sept. 14, 2018] - And the new Rails 5.2 server is up and running, with my sqlite3 database "active-record-linked" to it via my little app and its Ruby/Rails .erb and .rb programs.  I can query and update my pages from my iPad, Winbox, MacBook, Android tablets and Linux laptops or boxes.  Whew.  Thought it was history after the NSYS Maxtor went skanky.

Rails 5.2 is different from the Rails 4.x stuff, and getting it all running on a new box was more effort than expected.  Still have to tweak-in the ssl stuff.  Hint: if you've set up your experimental Ruby/Rails web-server, and the little guy is not visible on your LAN, here are three things to remember:

1): Start Rails with its listening socket bound to (all interfaces, not your box's IP #), and note that production images are slotted into ../public/assets, not ../app/assets/images (which will work in development mode, but *not* in production).  You can use "rake" to precompile .jpg's to tweak performance if you've got lots of pics.  

2): Make sure to open your tcp listening port in the iptables INPUT chain (the complex Linux firewall on CentOS) *before* the typical final "reject everything" entry in that chain.  (Or your box will stay invisible even to your other LAN boxes.)  Inspect the iptables firewall tcp listening ports with "iptables --line -vnL", and after your change remember to run "service iptables save", or your changes will be lost when the server is shut down. (A fine example provided by Silver Moon: )

3): Keep security tight (keep SELinux set to enforcing, don't open ports you don't need, monitor your /var/log/secure logfile, monitor IP traffic...), so the script-kiddies don't mess with your box or use it for bitcoin mining.  (And forget bitcoin mining.  You can have more profitable fun using the public stock markets, and trading against those who run insurance companies and pension funds, even if the nanobot-algoboxes kick ya every now and then.)
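On point 2: the rule has to land *above* the final REJECT entry, so you insert by position rather than append.  A sketch for a webserver on tcp/80 (the position number 4 is an assumption - read your own chain first):

```shell
# Show INPUT rules with their positions; note where the final REJECT sits.
iptables --line-numbers -vnL INPUT
# Insert an ACCEPT for tcp/80 above the REJECT (assumed here to be rule 4):
iptables -I INPUT 4 -p tcp --dport 80 -j ACCEPT
# Persist the change (CentOS 6 iptables service), or it dies with the box:
service iptables save
```

Needs root, so again a recipe rather than a runnable script.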

Oh, also make sure to test your server by yanking out the plug while it is all running fine, then rebooting and restarting with "rails server -b -p 80 -e production" (or whatever config you have chosen), and make sure it all fires back up without issues.

And, uh... do lots of backups, eh?

[Sept. 13, 2018] - This has turned into a major project.  I have recovered the files on the NSYS box, using the "rescue" option on the CentOS 6.6 boot DVD.  Basically, I skipped directly to the shell, managed to mount both logical volumes (root & home) from the RAM version of CentOS, and used an old WD Book-drive (which had an ext4 file sys on it), mounted with "mount /dev/sdc1 /NSYS", to grab critical files.  /NSYS becomes the mount point for the /dev/mapper/<yattayatta>root logical volume.  And I recovered my Ruby/Rails webserver and database.  But now the hard part: building the Rails server on a big ASUS box.  Got most of it working, except the node.js stuff, as I can't compile with gcc 4.4; need clang 3.4 at least. 

So, trying to build clang++, and it is comically difficult.  Got LLVM built, but the build for clang is crapping out at [47%], unable to find include file 'llvm/Option/'.  (Here is the fix: edit the ../include/clang/Driver/ file, find the include for "llvm/Option/", and just stick the full path name in front of "llvm".)  The Ruby/Rails server started as a lab experiment to verify some technology for a client's request.  But it morphed into a useful internal tool & database that I want to keep using.  The world is running on this massive mountain of constantly changing codebags, and the cost of using this "free" software is high, as the coin you spend to use it is time.  But curiously, the continuous improvement - the constant change - is analogous to the "rapid-prototyping" model I exploited using various APLs.  And the return to languages like Ruby and Python (interpreters, with compiled parts for speed) is exactly what APL delivered, and is now recognized as a best-practice.  The more things change, the more they stay the same. 

I learned "cmake" (again).  Too bad no one has solved the "moving library" problem yet.  Reminds me of Miyazaki's "Howl's Moving Castle".  All those lib files were here yesterday, but - **howl!** - today, they've just floated away!  And your cmake/make build stuff won't run at all!  Here in 2018, software is not really much better, sadly.  (Except Python, maybe.  It seems to "just work".)  The "clang" build (failed at [95%] on the CXX linking of the clang executable) got fixed by tweaking the path to the LLVM StringSwitch.h file, deep in a program called "TransRetainReleaseDealloc.cpp" (good name for a HeavyMetal band) in the clang source dir ARCMigrate.

And then with a bit more machine-magic, I was able to nav to the Rails test dir, and "rails s" brought Ruby/Rails to life (see my new TV picture at screen right). 

[Sept. 12, 2018] - OK, I don't hate Dracut.  Turns out you can catch the CentOS "Grub" boot screen, press "e", then down-arrow to the vmlinuz image, press "e" again, stick "rdshell" at the end of the vmlinuz boot line, press return, and then "b" to boot.  With a trashed "root" structure, this will half-boot the box, and drop you to a "Dracut" shell, which is basically a RAM filesystem running a tiny, crippled pre-boot Linux, used to sysgen your real image on the disk.  And from this little dwarf O/S, you can access your logical volumes (with some "lvm" commands to Dracut), which can let you get to part of the physical disk.  Harald's witchcraft stuff is (at the "dracut/#" prompt): "lvm vgscan", then "lvm vgchange -ay", and then "blkid".  This cryptic sequence will give you a list of your logical volumes on your physical disk.  And (WHOO HOO) you can actually "mount" some of these (maybe, depending on the damage to the disk's ext4 superblocks), then mount a USB stick, and use "cp -v -p -r <mount-point-on-buggered-disk> <mount-point-on-USB-stick>" - in my case, what is on the "/dev/mapper(yattayatta)_home" logical volume, which is bloody good (since it has an entire Android development environment for one of the APL apps I built).  I *cannot* mount the /dev/mapper..._root lvm though, which is why the grief.  After mounting the source (as /HOME) on the damaged disk, and the target on the USB stick (as /BOOK/home_l2nsys), I can run a cmd from "dracut" which is basically "cp -v -p -r /HOME/android_dev /BOOK/home_l2nsys", which should copy the Android development structure from the buggered Maxtor drive to the USB stick, which can then be copied onto the new CentOS 7.4 box, which has a secondary 1 TB drive.  (One issue - the USB stick is built with a vfat filesystem.  I may have to get a new one, and build it with ext4.)  But at least I appear to be able to recover part of the pooched Maxtor disk.
It would be nice if Dracut would let me get to a network card, but I have found no google-doc that tells how to do that.  [Later edit: May need to get a really big USB stick - android_dev has these userdata.img and systemdata.img files, each of which is basically an entire virtual "android device".  And each one seems to be many gigabytes!  The "cp" cmd has been running now for *hours*!]
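Pulling the dracut incantation together in one place (the volume-group and mount-point names are the ones from this log; yours will differ, so read the blkid output before mounting anything):

```shell
# At the dracut/# prompt of the half-booted box (rdshell):
lvm vgscan              # scan the physical disks for volume groups
lvm vgchange -ay        # activate all logical volumes found
blkid                   # list the /dev/mapper/... devices now visible
# Mount the surviving volume and the rescue target, then copy:
mkdir -p /HOME /BOOK
mount /dev/mapper/vg_nsys-lv_home /HOME     # name is an example
mount /dev/sdb1 /BOOK                       # the USB/Book drive
cp -v -p -r /HOME/android_dev /BOOK/home_l2nsys
```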

On a happier note, I successfully installed the Adobe Flash 31.x plugin on my CentOS-7.4 box's Firefox 52.2.  My Ruby/Rails webserver looks like it may be lost forever in the Nsys (6.6) box crash, but the experimental video-server stuff  (built using fine old Apache), is running ok, and I can now serve and render videos easily using Firefox 52 on the CentOS-7.4 box.  Also, copied the entire experimental video webserver to another, offline box. <big sigh...>

Moral of this sad story:  "Do yar fooking backups, lad! Otherwise, yar beein a lazy dumb-arse!"   In two words:  "Do backups!"

[Sept. 11, 2018] - The CentOS-7.4 box needed a fix.  Turns out the Firefox 52.2 ESR browser had wonky sound - Yootoob sort-of works, but most other websites with sound attached would not render right.  (A good test for complex rendering? url: which shows an animated fat guy with gold, falling and speaking...)  Some audio-visual stuff on the web is in a legal gray zone (some code has patents, but it is unclear if these are legally enforceable, based on both prior-art, and on precedents that support scientific inquiry - ie. patents are not supposed to stifle and impair science research).  You can fix audio-broken Firefox 52.2 by installing "ffmpeg", a video/audio converter, and its related development libraries.  Once these libs are installed and correctly pointed-to, Firefox runs right (almost).  But you need to install epel, and config the rpmfusion repository to be active.  Then you can "yum install ffmpeg ffmpeg-devel", and a truckload of code will be installed.  And you *need* it all, for stuff to work right.  I'll put the details somewhere soon.
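Until I write up the details, the usual recipe on CentOS 7 is roughly this (a sketch; the rpmfusion release-package URL pattern is taken from their install docs and may change):

```shell
# Enable EPEL, then the RPM Fusion "free" repo, then pull in ffmpeg.
yum -y install epel-release
yum -y localinstall --nogpgcheck \$(rpm -E %rhel).noarch.rpm
yum -y install ffmpeg ffmpeg-devel
```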

[Sept. 9, 2018] - All the best information can usually be boiled down to two-word sentences.  E.g.: Fiat lux.  Jesus wept.  Keep going.  Don't stop.  Be good.  Stay strong.  Look out! - and that old Anglo-Saxon expression we all know and love (that starts with "F").  Best advice my second flying instructor gave me (as he was exiting the A/C, and I was about to do my first solo flight): "Don't crash."  Given the continued expansion of US M1 (money supply - see latest chart in "Economics 2018" tab), I am thinking my market advice is similar: "Won't crash".

[Sept. 4, 2018] - Look at the "Stars and Space" tab, and see the false-colour image of the north-polar region of Saturn, with the amazing Hexagon-of-Hyper-Typhoons. Reminds me of Blue Oyster Cult, who explained: "It's the nexus of the crisis, the origin of Storms". 

[Aug. 31, 2018] -  Best Growing Year Ever - here at the farm.  Amazing weather has allowed three crops to be taken so far, with this Aug. 31st image at right, showing a 4th crop growing well.  Canada loves the "new climate", eh?  You don't get rich in agriculture, but it is a model which is well understood, and with a bit of care and attention to detail, can be run with happy efficiency, and generally successful outcomes.  As Lou Reed said:  "There are problems in these times.  But, hey!, none of them are mine!"

[Aug. 29, 2018] - Well, it's been a lovely summer: 3 crops so far off the farm, with a fourth growing like terraform-weeds.  Amazing - sometimes all the numbers line up: perfect growing weather this summer.  That M1 chart from the St. Louis Fed is the big datapoint.  Can't fight that.  The micro-moves suggest hardball strength, and too much cash on the sidelines.  As Paul Simon says: "Who am I to blow against the wind?" 

I've got the tech-stuff working well.  Disabled libvirtd on my video-player 6.6 box with "chkconfig libvirtd off".  And downloaded the latest SQLite3 snapshot source for the research box, and built it from source.  You can use it to sanitize the "moz_places" table in the Firefox <yattayatta>.default directory (filename: places.sqlite), by removing websites that it has tracked (F-Fox keeps a table of *every* website you have ever visited).  Eg. if you are doing risky research (think: fuel for your Topaz thermoelectric gen-set), or latest developments in the Pu-Thorium fuel cycle (hint: India has a working Thorium reactor), you might want to sanitize your browsing history, for security reasons.  Why Thorium?  It is one of those revolutionary technologies - fission power without too much dirty rads. 

Thorium is actually quite magical.  It's the "secret sauce" in vacuum tubes, painted on the glowing cathode heaters, that lowers the Langmuir work-function, so electrons literally "boil" off the hot wire with glee.  Like all great discoveries, it was a happy accident.  The GE tests at the Schenectady labs in the late 'teens were not even supposed to include the Thorium.  It gives tubes their low-cost magical transconductance.  (I have this amazing regen radio circuit that uses a single 6.3-volt-heater vacuum tube, running on only 40 volts DC.  It's like a piece of witchcraft, how well it works.)

The magic comes from element #90, Thorium (or more accurately, thoria, the oxide of thorium), which wildly alters the "work-function" of the hot wire.  The single tube works as a hot diode.  See Langmuir's 1923 article:   (this is only an abstract, sadly)

 For info on new Thorium fuel cycle in fission reactors: See: .

Returning to our sheep: For Windows boxes, just download the SQLite3 binaries & slot them in somewhere.  To remove a link from "moz_places": start SQLite3 with places.sqlite, then: "DELETE FROM moz_places WHERE url LIKE '%<put-url-here>%';"  Don't forget the damn semi-colon at line-end, else SQLite3 prints three stupid dots.  (Why not print a semi-colon, guys?)  To list tables: ".tables".  To dump a table to a .CSV file: ".mode csv / .headers on / .output myfilename.dmp / SELECT * FROM moz_places; / .exit".  The / means press return.  The ".yatta" commands are SQLite commands; the original SQL stmts like SELECT and DELETE work.  Larry Ellison must be proud, eh?  SQL took over the world.  I guess the IBM folks must be pleased, too.

[Aug. 26, 2018] - I have a market forecast of the Dow-Jones Industrial Average falling to 18,950 by March of 2019.  [See the "Economics 2018" tab.]  This number actually surprises me, but given the M1 data from the St. Louis Fed, coupled with 25% tariffs, it actually looks quite possible.  It's a "back-of-the-envelope" calculation, but it looks both prudent and corrective.  See, the markets act when the political folks won't or can't.  (Eg. oil prices.)  We have to expect US zero-risk 10-yr yields to move towards 4% (from their almost 3% now), and stocks to offer 5.5% to 6% dividends (as they are much more risky).  A Dow-Jones of 18,950 is only 26.5% below our Friday Aug. 24th close of 25,790.35, so we are just looking at a normal correction, really.  And it will be a correction that restores market economics, like oil going from $140/bbl to $40/bbl (since a lot of mid-east oil only costs $15 to $20/bbl to pump).  Markets do a good job of restoring sanity, when things get silly.

[Aug. 25, 2018] - The CentOS-7.4 box is pretty stable now, and is turning out to be useful and nice to use.  I now have all the apps from the CentOS 6.6 box running on the 64-bit CentOS-7.4 box (screen image to the right).  Got the C-Basic interpreter working with LOAD and SAVE cmds.  I want to add vectors to it (like APL uses).  Been spinning thru astronomy stuff (need to get that Kepler planet study and see how they did it), and had a look at some of the research on Neptune (see the "Stars & Space" tab - I added the two most amazing Voyager-2 pics from that flyby).  Neptune is cold, but it has seasons, and is very beautiful.  Because its surface temperature is only about 60 Kelvin above absolute zero, its white clouds are thought to rain tiny diamonds.  Also, check out the photo of the Martian south-polar CO2 eruptions, which make "araneiform terrain" formations - now called "The Spiders from Mars" (!).  Also, an MRO pic of an ancient, dry riverbed - basically a Martian "canal".  There apparently is a lot of water-ice on Mars.  So, as long as we take a good supply of gin and tonic, some O2, and a few air-tight tents, we should be OK, if we get on board an E.Musk-BFR.  (Oh, right.  Gotta bring a Topaz thermo-electric gen-set, and a bag of Pu-pellets...)

[Aug. 24, 2018] - Here is the most useful one-line shell script I have ever written.  Took hours to find how to do this.  I just wanted a desktop icon on my Gnome 3.22 screen (the top-level GUI on CentOS 7.4) to start a terminal session, run a BASH shell-script that runs an on-screen-watchable batch-job (so I can monitor if the job is working), and then ends, but leaves the xterm window open so I can confirm if things ran OK.  You just edit the exec= line in your Gnome .Desktop (active icons) directory to invoke this BASH script, which is two lines:

gnome-terminal --disable-factory --geometry 175x50 --zoom=1.5 --tab --title='Get the Data using Lynx' --command="bash -c 'cd /home/myuserid/lynx; /home/myuserid/lynx/; $SHELL'"
# --- you just enter 'exit' to close the Gnome xterm window, when done...

The above just cd's to the needed directory (~/lynx) and then runs the shell script there that does the work ( ).  Call this file RunItNow, make it executable, and make an icon in /home/myuserid/Desktop that has the "exec=" line set to "exec=RunItNow" in the RunItNow.Desktop file, and you have a clickable icon that will pop open a terminal window, run your job (and report what it is doing into the just-opened xterm window), and then leave the window open so you can spot-check if things ran right.  This works on CentOS 6.6; on CentOS-7.4, remove the --disable-factory and --title parms, and --geometry and --zoom are not needed (the gnome xterm window is a sane size).  Gnome 2.28 on CentOS 6.6 is more flexible, but Gnome 3.22 on CentOS 7.4 works, and flies like the wind, since the 7.4 box is a 64-bit Intel 4-core i3, versus the ancient production 32-bit workhorse that runs 6.6.  
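For completeness, the .desktop side of it might look like this (a sketch: the Exec path and Name are examples, and the keys are the standard freedesktop Desktop Entry ones):

```
[Desktop Entry]
Type=Application
Name=Run It Now
Comment=Open a terminal, run the lynx batch job, leave the window up
Exec=/home/myuserid/Desktop/RunItNow
Terminal=false
Icon=utilities-terminal
```

Terminal=false, because the RunItNow script opens its own gnome-terminal window.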

Also note: If your CentOS 7.4 box is kacking with ABRTs on failure to run the SELinux "setroubleshootd" python program, due to the two-SELinux-policy-files problem (you will have two policy files in /etc/selinux/targeted/policy - .30 and .31 - because of an upgrade bork), you can fix this by updating your SELinux core policy library and programs with a single "yum" command.  As root, run "yum update policycoreutils"; the SELinux targeted policy stuff will be upgraded, and the "setroubleshootd" daemon will run OK again (and you will be back to seeing the "dac_read_search" errors pop up on a regular basis, due to the new, tighter, tougher and more secure Linux kernel - see Dan Walsh's note below for the explanation why).

But, if you have tweaked SELinux booleans, you will have to re-tweak, as your changes will *not* be preserved.  (Bit-fiddle the SELinux booleans from the cmd-line with "setsebool -P <the_boolean_to_change> <1 or 0>".  Make sure to use the "-P" option, or the SELinux boolean changes will vanish when you shut down the machine!  Why would "Let's throw away all this guy's boolean security edits!" be the default option, FFS?)  Inspect the SELinux booleans with "getsebool -a | less" (which pipes the result into a pager so you can page up and down and read the damn things).

If you are running CentOS-7.4 and are curious why you are getting "dac_read_search" AVC (access vector cache) denials from SELinux, Dan Walsh explains why here, in one of the best-written technical articles I have read in years:

[Aug. 21, 2018] - I've gotten so annoyed with modern hyper-bloatware that I am writing my own Basic interpreter in C - but with floating point, of course.  The project started as a hack to put a TinyBasic-like thing on an SBC (single-board computer), but it has morphed into something I might use elsewhere, since I now have the floating-point stuff working, and have installed a POW function.  It also keeps me from over-trading.  I now have a working prototype of what is basically a simple stack-machine, running on my home-built Linux-kernel, and also on a modern CentOS 7.4 Linux, an Apple MacBook (using the Clang stuff that is gcc compatible), and on a tiny Windows-8 notebook, using the latest MinGW stuff (which actually works surprisingly well). 

Plus, I put a 32/64-bit multi-lib version of Wine ("Wine Is Not an Emulator") on my CentOS 7.4 box, which really works quite spiffy.  I also put Lynx and my custom home-brewed database stuff on it, and it all works, to my amazement.  So all my old Windows-based analytics stuff gets to fly like Richard Russell, on a CentOS 7.4 box that runs like a 4-core Tibetan windhorse.  I have this math stuff that used to just grind, but on the Linux box it now skips along like an XB-70 Valkyrie, and completes in 5 to 7 seconds.  Great - except that it says I should buy a doomsday hideout in New Zealand, as this bull market is so long in the tooth, with its FANGs so seriously extended, that we are in real danger of a serious shredding, once we get another 50 bps uptick on the long-bond yield.   

Or maybe all my numbers are wrong - or premature?  Maybe economics will be trumped by politics, and rates will be held low, at which point (to extend the aviation analogy) we may need to grow little germanwings, lest we feel the love of cumulo-granite when the US-$ finally makes its phase-jump to one-to-one parity with the yen. 

"Gimme a beer, I need to sober up."  (I really heard someone say that once, way back when I was in school... the lads had been drinking rum, you see...).  The global economy seems to be saying the same thing, right now.  Gray Cardiff's chart from "Sound Advice" shows it nicely. (See it in the "Economics 2018" tab)

[Aug. 19, 2018] - Strange times, strange month.  Historically, this time of year has been a risky time for equity markets.  As folks return from their vacations, and we move towards harvest season, they typically harvest their gains from the stock market.  But there seems to be more happening.  JPMorgan's chart shows that economic storms often begin this month.

[Aug. 9, 2018] - Reading about Martian glacial lakes of water under the polar ice caps.  Excellent.  Who cares if they have bugs?  Main thing is, Mars has a *lot* of water.  This is good, as it makes some terra-forming possible.  With a few BFRs, we can let the loading begin.  Our species' history is three steps forward, two or three back.  I'm researching long-wave history and more Bitcoin stuff.  Bitcoin is either a complete bogus scam, or perhaps a brilliant solution to the money-starvation and associated falling-real-wage problem that is limiting global growth.  Still not sure.  

[Aug. 7, 2018] - Tonight, we watched a mother deer try to lure a coyote that was stalking her and her fawn out into the middle of the alfalfa field, away from her fawn.  Her efforts were unsuccessful; she returned to the fawn, and she and it "high-tailed" it down the field, hopefully away from the predator(s).  Minutes later, I saw the coyote walking carefully down the side of the field.  Think life is tough in the big city?  The city is civilization.  Nature, which is so chock-full of predators, is much more challenging.  It's survival of the fastest - both of foot and of rate of maturation.  Lesson: Move fast, get big quick, or you will die soon.  

[Aug. 2, 2018] - On July 25th, 2018, SpaceX successfully launched a Falcon-9/Block-5 rocket from Vandenberg Air Force Base, and delivered 10 more Iridium communications satellites to orbit.  Bravo SpaceX!  The future is being created by these guys.

[July 23, 2018] - Creative Destruction!  We need to respect the virtue and necessity of destructive Schumpeterian "storms".  Modern "Safety Nazism" (a term first used by the auto industry in the 1970's) has become the dominant objective of Government policy.  Is this wise?  We need a new understanding of what Schumpeter and others have said.  An aggressive reduction in the scale, extent and cost of delivering "government" would probably lead to a somewhat more dangerous world, but also one in which there is *much* greater opportunity and self-stimulating economic growth.  The current expansion underway in the USA (which contrasts with the stagnation and drift in hyper-social-welfare-oriented Canada, for example) seems to bear out the truth of the assertion that "Less is More", when it comes to the toxicity of constant expansion of the non-self-financed "Government" sector.  In Canada, despite our skills and factor endowments, we still run poorer, with typically double the unemployment rate and half the economic growth rate that our neighbours to the south enjoy.  This needs to change.  We constrain Schumpeter's "gales", and we protect and restrict much human action here as well.  (Except drug abuse, apparently.)  This is perhaps an unwise strategy.

[July 19, 2018] - Visited a friend who has designed and built - from scratch - a Z80-based 8-bit computer. He used KiCAD, and had the circuit board etched and drilled in China, bought all the parts, and soldered it all together. He then wrote his own monitor program in 8-bit assembler, loads it thru a serial port via a terminal emulator running on his MacBook, which then brings his Z80 machine to life. I am in awe.  He gets a ">" prompt on the screen, and can load and run programs on the bare-metal.  No FPU and no mass-storage, but he can control an old-style joystick, and make it drive lights on a 2-digit LED display.  I have a physical, hacked-together "DOSbox" on my basement workbench, which runs MS-DOS 5.0, and can even read and play a CD-disk, but his accomplishment is far beyond my recycle-bin hackery.  The Z80 chip is actually quite expensive ($10 USD), as it is so old, it is almost antique-status.  But it is the tractability of the device he has built that makes it so wonderful.  There is no hidden "black-boxery" anywhere. The "Tiny" monitor program he wrote is a thing of austere beauty, written in 8-bit assembler.  I suppose he must also have written a translator that transforms (assembles? compiles?) the 8-bit machine-code into actual hex digits, and a simple packet-driver that is first loaded to bring the serial-port to life.  This is much more cool than even a Raspberry-Pi or an Arduino (which themselves are very cool things..)

[July 18, 2018] - Rome did not "fall" because Alaric sacked the city, or because the Rhine was left undefended and froze over.  It ended because it became a better deal to defend one's own borders, and make and enforce one's own laws, than it was to accept Roman civil authority & arbitrary Roman taxes.  Rome didn't fall; its business-model just broke.  Will the EU suffer a similar fate?

[July 13-14, 2018] - It just keeps getting weirder - too bad Hunter Thompson is not still with us.  The Dem's post-election ratboys have poofed up 12 Ruski badguys for CNN to talk about, and so Chuck Schumer wants Trump to cancel his Helsinki meeting with Putin. Of course, Trump will meet Putin.  Schumer should resign.

[July 12, 2018] - The "China-USA Trade War!" talk is just theatre. It's like a Kabuki play - lots of sound and fury and big foot-stomping formal positions - but it is for the fans at home.  If China wants to be the world's richest nation, they will just have to learn to play fair, and not try to cheat all the time.  They should learn the wisdom of inaction - which historically, they were masters at.   Tariffs are just a tax on your own people, and all tax hurts prosperity & restricts growth.  Everyone knows this, East, West, North & South. Picture to the right is my "Summer Office", where I am working building a road with a hand-shovel and a chain saw!  :D  (Moved the pretty sunset pics to "FeeSimple" section..)

[July 4, 2018] - Happy 4th July, USA!  Met a grad. from Waterloo on weekend who is on his way to new job @ Google in Californistan.  He has promised a .pdf on how to read in raw data for use in Tensorflow.  Would like to replicate my APL/Xerion BJV AI-Forecaster into Tensorflow - have the box, and built Tensorflow from source, but there is no actual usage doc. for it that I have found yet.  Asked on msg-boards & Stackbarf, but still no joy.  Oil broke $75 USD/bbl yesterday, before falling back.  Nice pattern, fairly clear signals now.  But we may need to adjust to a very different future world than what was expected. 
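(Until that .pdf arrives, the data-wrangling half of the problem can be sketched with nothing but the standard library. The hypothetical `make_windows` helper, the window encoding and the four toy closing prices below are my own illustration, not the promised doc's method: parse raw CSV, then slice it into (input-window, next-move) pairs, which is the general shape of thing any NN front-end - Tensorflow or Xerion - wants to consume.)

```python
import csv, io

def make_windows(prices, width):
    """Slice a price series into (window, next-move) training pairs:
    input = the last `width` closes, target = 1 if the next close is higher, else 0."""
    pairs = []
    for i in range(len(prices) - width):
        window = prices[i:i + width]
        target = 1 if prices[i + width] > prices[i + width - 1] else 0
        pairs.append((window, target))
    return pairs

# Toy raw CSV, standing in for a real price-history file:
raw = "date,close\n2018-03-26,66.10\n2018-03-27,66.80\n2018-03-28,66.50\n2018-03-29,67.20\n"
closes = [float(row["close"]) for row in csv.DictReader(io.StringIO(raw))]
pairs = make_windows(closes, width=2)
```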

The USA-China trade imbalance is extreme and unsustainable. A broken and bankrupted USA dominated by a few Leftist-billionaires, and peopled by a great sea of low-paid/low-skilled no-future temp-workers & violent Spanish gangsters, is not what folks want to see as their future.  Change will have to come, no question.  Tariffs and "trade-wars" are bad, but the real damage that has been done to the USA has come from its own Leftists.  Folks there know this.  That's why the US markets can keep rising, despite all the "tariff-tantrums" happening.  Corrective measures are at least beginning to be taken, and real improvements are being made.  The world needs to throw away all the permissive bullsh/t and compassion-sickness nonsense of the 20th century, and get hard and disciplined again - especially here in North Am, where the Left has done so much profound economic damage - just as it did in Russia and Eastern Europe.  But this is not the "End of History..."  It is finally looking like it might be a real Beginning.   Markets will swing violently, but will trend higher, I suspect.

[June 26, 2018] - Above is a "Group of Seven" style image from my northern cottage.  Tom Thomson and other "Group of 7" Canadian artists painted these amazing images, with bright, garish colours - they were ridiculed and laughed at - but the image at right was taken with a Huawei cellphone, with *no* colour editing or post-processing.   The natural world can evidence amazing beauty. (See "FeeSimple" section now for sunset pics.)

[June 20, 2018] - There are these fake pictures of a "black hole eating a star" in all the science stories, based on an article in Science on "tidal disruption events" - basically the formation of energy-emitting accretion disks around black-holes.   Click on "Stars & Space" to see the actual, true "image" data (which is actually quite exciting and interesting...).

[June 4, 2018] - G7 is being L7.  World will have to get used to a new American model, where instead of having a "Lecturer-in-Chief", they have a traditional "Leader". I am not a Trump fan, but it is hard to argue with a 3.8% unemployment rate.   What is happening is exactly what the honest economists said would happen, if the too-high US corp. tax rate was reduced to something reasonable.  But this labour-mkt improvement is not happening during a runaway bubble (like Y2K), so it might be a lot more sustainable.  USA can annoy with its arrogance, but the place is a-rockin', and they don't need anyone to come-a knockin'!  The zeitgeist is ugly, but the model is working.  Tough to argue against success.  But it has made me rethink my image of the Visigoth leader, Alaric.  Our histories were written by Christian Latinists.  What if Alaric was, in actual fact, an honourable and wise man, reacting against a cruel, despotic and corrupt regime that *needed* to be brought down?  In rotulis de Rota Fortunae. Mutare est semper. (The Wheel of Fortune keeps turning; change is constant.)

[May 31, 2018] - Crazy-nice weather.  Some high-winds and minor rain from tropical storm Alberto, but rain was needed. Mkt volatility is also crazy, directly a result of competing AI's.  Best book I ever read was "Reminiscences of a Stock Operator", by Edwin LeFevre (the story of Jesse Livermore using pseudonym "Larry Livingston"). You can read that book four times, and when you read it the fifth time, you can still learn something new. The market action we are seeing now, is not at all new.  And the idiocy of modern government policy is also not new.  I have been researching cryptocurrencies, trying to answer the question: "Could the future see a cryptocurrency become the global reserve currency?"

[May 28, 2018] - Warm summer is finally here.  Boat in water, sunset on lake.  WTI Oil down to $66.29 USD/bbl (down $1.59 today, ie -2.34%).  Technology is working.  Life is fine.  Happy Memorial Day, US friends, family and clients!  Be good to those you love.

[May 22, 2018] - Weather stays cold, market stays hot. WTI Oil over $72/bbl (US), Cdn $ creeping north. Banks strong (& getting stronger).  Even uranium mining is heating up.

[May 17, 2018] - As my research suggested would occur, we are now seeing rising yields on long bonds synchronizing with a rising stock market, and rising asset values. The "Triskellion Trillium" is my latest discovery among the local alleles.

[May 15, 2018] - Tired, but I have the farm in shape.  Must return to the BJV forecaster.  My marketplace research keeps turning up surprising results.  The belief-sets held in the minds of many modern younger folks are scary and very dangerous.  A few years back, the Leftists captured the majority mind-share of the kids in schools, and we find now that a curious, toxic mixture of half-truthed Socialist dis-info has taken deep hold - even in smart folks who should be able to see thru the basic lies of left-wing thinking.  The Left wants to "Tax and Rend", and perhaps that is why I chose to self-rusticate.

Our governments are unable to take on nation-critical future-focused projects, and the bright young private-sector folks who should be "Building the Future" seem to be either paralyzed by bogus political dogma, or deluded by visions of their own personal greatness - unable to recognize any flaws in their own thinking, only in others.   They have zero capacity for self-reflection.  Such artificial self-confidence pre-programs wisdom-avoidance & ensures certain self-destruction.  I keep seeing this model, over and over.  And I keep seeing the train-wrecks and planes-flown-into-the-ground that result from this increasingly common lack-of-self-honesty. 

Is Humanity condemned to repeat (and now amplify) the same errors with each new generation?  Must the majority of both public-sector and private-sector projects always be exploding disasters, like the cheap-thrills endings of James Bond films?  The only place I see serious beauty, quality and genius-level skills being deployed these days, is in weaponry, thievery and modern surveillance systems.  :/   My BJV Forecaster thing seems to work, and yet I fear it could just be an exercise in teratology, if the Governmentalist-gangsters manage to destroy all future opportunity. 

My farm has a unique allele of Trillium, which I have named the "Blood Trillium", as its petals are blood-red, and slightly deformed.  It is dramatically beautiful. Perhaps it is also a portent of the terrible conflict that I see ahead of us all.

[May 6, 2018] - Completed cleanup from ice-storm damage, and then we had major windstorm which knocked over numerous old-growth trees in forest. Lost power, ran farm on generator.  I was in the forest, using chainsaw to cut down ice-storm damaged tree, as the rain began, line-winds ran up to 120 kph+ in gusts, and then took down trees around me.  Standing there with my little white hard-hat, amid ropes and winches, I felt a tad foolish.  I managed to get my target tree dropped safely without it destroying my power line (the critical objective!).  It is different actually being in the middle of a storm versus looking out of a window at it.  One gets a different perspective. 

[Apr. 28, 2018] - My IP advisors (unofficial so far, just chats..) say I should hush up about what I have been doing..  The future may be dark.   Hoping to investigate the Kepler research (which uses TensorFlow).  The open-source approach is best, in both science and software.

[Apr. 23, 2018] - Happy St. George's Day.  Research on Bayesian prob. stuff. [Posterior odds = Original odds x Likelihood ratio].  But you can also address the Base Rate Fallacy using a Frequentist approach (my preference); viz: for a large sample:  Pr[Event] = true observed # events / (# of false positives in sample + true observed events), and get the same smaller - ie. correct - probability.  But if data is faked, your results are wrong. (Eg. "Global Warming" "science", medical errors, gov't statistics from corrupt regimes).  But faked data can make the fakers rich, powerful, and *very* dangerous (Eg. the lies of religion => religious wars, cheating in poker => gun fights, State propaganda => false-flag attacks, then full war).  We must seek accuracy, produce quality, and be effective in our defence of what is true. Could AI-augmented analysis assist in the detection of deception?
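A quick numeric check that the odds form and the frequency form agree. The disease-screening numbers below are hypothetical, chosen only for illustration:

```python
# Hypothetical screening numbers, for illustration only:
# base rate 1 in 1000, sensitivity 99%, false-positive rate 5%.
N = 100_000
sick = N // 1000                        # 100 truly affected
true_pos = sick * 0.99                  # 99 correctly flagged
false_pos = (N - sick) * 0.05           # 4995 healthy people flagged anyway

# Frequentist form: Pr[Event] = true observed events / (false positives + true events)
freq_prob = true_pos / (false_pos + true_pos)

# Odds form: posterior odds = original odds x likelihood ratio
prior_odds = sick / (N - sick)
likelihood_ratio = 0.99 / 0.05
post_odds = prior_odds * likelihood_ratio
bayes_prob = post_odds / (1 + post_odds)
# Both give ~1.94% - the same small, correct probability, far below the naive 99%.
```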

[Apr. 21, 2018] - Learned how pirate websites are generating bogus click-views for legitimate adverts - but the adverts are never actually seen - the pirate sites make it appear the ads were served up to page-viewers, but actually the advert is hidden - the person looking at the site hosting stolen material does not even see the ad, but the site owner gets revenue as if the ad was really viewed.  Double-ended fraud:  Cloudflare to server in Ukraine to Swedish "Bullet-proofer" server-mgmt site. Little can be trusted now. 

[Apr. 19, 2018] - Throughout all of human history, the supreme skill and creative excellence of the artificer's art has always been reflected in devices such as these. (viz. the tsuba for a Japanese katana, Edo Period, from collection in the Museum of Modern Art in New York.)  "I'm guided by the beauty of our weapons..." - Leonard Cohen.

[Apr. 14, 2018] - Ice-Storm Saturday... We are not just fooled by randomness, we can also be saved or destroyed by it, too.   Today, high winds, ice-pellets, freezing-rain, broken tree branches.  Cold, nasty, wet, gray & windy.  Spring in the roaring-40's...   Matches the stock-market action. 

In the 1960's, (before my time, really), the NASA folks were using Fast-Fourier-Transforms + comb-filters to process the radio signals from the boys on Luna.  Clever folks bought IBM 360/44's, and used their fast, accurate floating-point math to run similar FFTs + comb filters to monitor, extrapolate and trade stock market action.  As a tiny child, the first articles I read on computers & stock trading were on this topic.  The IBM 360 was pure magic - like having a machine-gun, when your enemies were howling nutters waving swords and screaming nonsense.  You could just carefully and accurately dispatch them *all*, and then leave, and be home in time for tea and scones, and a relaxing dinner.  The IBM 360 was introduced on April 7th, 1964. You can read the original brochure at the IBM site:     Note the Model 44 use-cases: Missile telemetry, real-time nuclear-reactor control, and digitization of electrocardiogram data for numerical processing (ie. medical diagnosis).  There is nothing we are doing with neural networks that was not already being done in the 1960's.
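The FFT + comb-filter trick is simple enough to sketch in a few lines. Below is a toy Python version (naive DFT, stdlib only - the signal, noise level and fundamental frequency are all made-up illustration values, nothing to do with NASA's actual processing): keep only the spectral bins at multiples of the fundamental, and most of the noise disappears.

```python
import cmath, math, random

def dft(x):
    """Naive O(N^2) discrete Fourier transform - fine for a toy N."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part (input signal is real)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

random.seed(1)
N, f0 = 64, 8                                            # signal repeats 8 times per frame
clean = [math.sin(2 * math.pi * f0 * n / N) for n in range(N)]
noisy = [s + random.gauss(0, 0.5) for s in clean]

# Comb filter: keep only the bins at multiples of the fundamental, zero the rest.
X = dft(noisy)
combed = [X[k] if k % f0 == 0 else 0 for k in range(N)]
recovered = idft(combed)
```

Since only 8 of 64 bins survive, roughly 7/8 of the broadband noise energy is discarded while the periodic signal passes through untouched.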

And the IBM 360/44 ran APL 360, a work of creative genius, on par with the development of written language.  Check out this 1967 document on APL-360.  (Look, make sure you download the APL385 font, which you can get from the UK Vector site here:   It is goofy-easy.  Just download the font, put it somewhere, navigate to your Windows Font directory, click the "Install New Font" and then just select the APL385.ttf file you downloaded and it will be installed. Then your browser will automagically show the APL 360 doc with the correct APL font.  Easy-peasy. The font-download URL is also at the bottom of the APL360 article.) Original IBM research report, RC-1922, from October 16, 1967, on "APL/360 Terminal System":

This article is only a few pages long, and is maybe a 5-minute read for anyone with any sort of technical, scientific, or engineering background.  You can download any of the GEMESYS APL's for Android (see way down below on this page for details), and try APL for yourself on an Android tablet.  All GEMESYS Android apps are freeware for research and learning, no in-app adverts or tracking.   I also have an APL running on the iPad, but Apple restricts any programming language or interpreter from being offered in their company store.   Are they monopolists who should be taken down by the anti-trust folks?  Big Grin

[Apr. 13, 2018] - Latest results, GEMESYS Market-AI running on new CentOS box. Maybe it works...  (See screen-image of Xerion results, just below this text...)

[Apr. 12, 2018] - Did a little research project on "cui bono internetius" - who benefits from the internet?  Turns out the benefits flow mostly to governments, in the form of expanded social control and monitoring, and then to marginalized terrorists and other extreme-politics entities, followed lastly by a handful of very large corporate groups (think: Google, Apple, Facebook, Amazon, Netflix - the "GAFAN") that have reached scale.  Everyone else faces higher costs, it turns out, or a degraded revenue stream.  This was not how it was supposed to work, if you recall.  It appears the future has been hacked by the state and its agents.  <sigh>  Rather like it always has been, no?   Hey, we had a {war/election/new-innovation}!  Who {won/won/got-rich}?  Why, the Government, of course!   What did the people get?  The bill...  Wink   My image of the GAFAN looks like Godzilla, and we small guys are Tokyo! Big Grin

[Apr. 8, 2018] - Happy Hanamatsuri. (Flower Festival)   The weather here is cold.  Bright morning sunshine after a -9 celsius nighttime, (see picture at right - April 8, 2018, Temp: -5 celsius ) and then another hard-core blizzard with heavy snow - on April 8th!  I distrust the "science" that asserts "global warming".  I see exactly zero evidence of such a process at work where I live.   But I see lots of evidence of edge-condition statistical volatility.  Like Mr. Taleb says, we are often "fooled by randomness".   The trick is to try not to be.  No flowers here (and minus 7 expected tonite).  But the technology is working very well...  Cool

[Apr. 6, 2018] - Video-feed work.  Compiled custom MPlayer, Mencoder & FFmpeg from source. Developed methods to use MPlayer to monitor real-time video feed, & Mencoder to record real-time video stream to .AVI file - both sound and video. Non-trivial exercise, but Linux has v4l-2 (video for Linux, ver.2 (ver.1 nfg)) drivers for TV-7133 card and video works - dTV converter output (b-cast channel 3) into video-card allows digital video to be processed, once all codecs loaded into MPlayer (quite a non-trivial task - but now working..)  Can an NN process a real-time video feed?

[Apr. 3. 2018] - On right is screen-image of UTS-Xerion, running net MNnet3040 on CentOS 6.6 Linux, with Unit-Display showing target forecast as of Mar. 29/2018 vectors.

[Apr. 2, 2018] - (Apologies if this site is loading slow - really need to re-factor.)  I built Xerion from source on a much more modern box - CentOS 6.6.  Fast, dual-processor box (Pentium 4, 3.00 ghz, ATI prototype graphics card, hi-res digitally-connected monitor).  Nice images, big tower, runs warm. This turned out to be a big exercise (I documented it, finally). Running gcc kept giving me a screen-blizzard of unresolved references - until I disabled the system-current version of the shared library that was in /usr/lib, so the libtcl.a static lib that was in /usr/local/lib could be found & used - the Tcl 7.3 one that Xerion needs! (Duh! or maybe a Homer-Simpson "Dohh!").   This lets me jump OS from Fedora-9 (circa 2008) to CentOS 6.6 (circa 2014).  Not bleeding edge, but current and stable.  (Stability is a *critical* requirement.  Anyone listening? Linus?)  I have TensorFlow 1.4 running on a CentOS-7.4 box, running Linux kernel 4.14 (original kernel 3.x version had no sound!) - but stability is not great.  And TensorFlow on the Macbook running Yosemite doesn't do its IEEE 754 floating-point routines right - seems to be an Apple O/S issue.  Linux kernel 2.6.32-504 on a Pentium 32-bit box with Xerion can be trusted to be stable - it just works unless the power fails - and it does its math correctly.  (I know, because I have checked with Kahan's UCB floating-point tests, compiled from source.)
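A toy probe in the spirit of those checks (the UCBTest suite itself is a C program; this is just a minimal Python sanity test of IEEE 754 double behaviour, not a substitute for Kahan's tests):

```python
def machine_eps():
    """Probe the unit roundoff: the smallest power of two whose addition changes 1.0."""
    eps = 1.0
    while 1.0 + eps / 2 > 1.0:
        eps /= 2
    return eps

# A correct IEEE 754 double implementation gives eps = 2**-52, and decimal
# fractions do not round-trip exactly - but the error stays down at a few ulps:
assert machine_eps() == 2.0 ** -52
assert 0.1 + 0.2 != 0.3                      # classic binary-fraction artifact
assert abs((0.1 + 0.2) - 0.3) < 2.0 ** -50   # ...yet the discrepancy is tiny
```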

So, on my new box, I built and tested a bunch of different N-nets.  Confirmed that something *different* may be happening now in markets.  The world of 1995 to 2015-7 (which included some pretty wild events, right? - the 1998 Russian default, the 2001 bubbleburst and the 2008 US Housing-crisis/Lehman-meltdown) is curiously different than the world of mid 2017 to 2018/now.  Boolean jump-vecs show it.  We are getting these long runs of high serial-autocorrelation - some are 6, 7 or 8 days (not looking at next day, but at plus 4 or 5 days..).  Some stocks are breaking, and then the break just auto-correlates.  Looks like the 1930's, despite the big slushpile of cash that is slopping around.  Really looks different.  Something is wrong with volatility?  Or are all the trend-following algorithms self-synchronizing?   (That is my working hypothesis.)  (FD: I am not a fan of options - the vega, gamma and such... - but I bet something is showing up there.)  Something is different with this market - and the quick 500 point drop on the DJIA this morning seems to confirm my fears.  Still long, but hurtin' a bit (like those old cowboy songs...).  Just when I get my tech working, the world looks like it is going to blow the side off, like the old Apollo-13 ship.  This isn't just novichok-assaulted retired spys and Asian trade-wars.. this might be something more systemic...  Reminds me of when you smell burning electrical insulation on the flight-deck at altitude over water... makes the hair stand up on the back of your neck.  Are we in for a little burst of "fear" to match the recent "greed"??
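The serial-autocorrelation claim is easy to test mechanically. A small stdlib-only sketch - the two example up/down series are invented, just to show the sign of the statistic for trending-in-runs versus choppy action:

```python
def autocorr(xs, lag):
    """Lag-k serial autocorrelation: Pearson correlation of a series with itself shifted."""
    n = len(xs) - lag
    a, b = xs[:n], xs[lag:]
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Invented daily up/down (1/0) series: one that trends in multi-day runs, one that chops.
runs   = [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
choppy = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
```

Long runs push the lag-1 statistic strongly positive; perfectly choppy action pushes it to -1. The same function at lag 4 or 5 is what the plus-4-or-5-day observation above amounts to.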

TRASER model is long, but Xerion BJV-NNs say: "Huh? No signal..."  But Xerion is awesome - I have a 172 boolean jump-vector test dataset, and with a slightly bigger hidden layer, I was able to train the net down to 100% accuracy (like the old XOR test).  Of course, that net is useless for prediction, but it shows the power of the Xerion simulator (I think it must be the grandfather of TensorFlow - all the same stuff seems to be inside it.)  It is interesting how much better backpropagation works with conjugate-gradient direction selection, combined with a line-search, rather than just using a fixed-step change with various epsilon values.  And I discovered a parameter to tweak the verbosity level during training, from 0 to 3. (More info shown as backprop/training runs..)    And the "Unit Display" is really cool - you can visually inspect each day of your dataset, and see the TANH node activation values as generated Hinton diagrams.  This technology is quite magical.  And I think I understand why convolutional networks can perform even better.  Widespread use of this AI technology is going to change many things.  Or maybe it will kill us all, eh?  :D
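The conjugate-gradient-plus-line-search idea can be sketched on a toy problem. Below is a minimal Python illustration - Polak-Ribiere direction selection with a crude backtracking line search, on a hypothetical badly-scaled 2-D bowl. This is my own sketch of the general technique, not Xerion's actual minimizer:

```python
def f(w):       # hypothetical badly-scaled 2-D bowl; minimum at (3, -1)
    return (w[0] - 3) ** 2 + 10 * (w[1] + 1) ** 2

def grad(w):
    return [2 * (w[0] - 3), 20 * (w[1] + 1)]

def line_search(w, d):
    """Crude backtracking: halve the step until the move actually lowers f."""
    step = 1.0
    while f([w[0] + step * d[0], w[1] + step * d[1]]) > f(w):
        step *= 0.5
    return step

def cg_minimize(w, iters=40):
    g = grad(w)
    d = [-g[0], -g[1]]                   # first direction: plain steepest descent
    for _ in range(iters):
        step = line_search(w, d)
        w = [w[0] + step * d[0], w[1] + step * d[1]]
        gn = grad(w)
        # Polak-Ribiere beta mixes the old direction into the new one
        # (clipped at 0, which amounts to a steepest-descent restart).
        beta = max(0.0, (gn[0] * (gn[0] - g[0]) + gn[1] * (gn[1] - g[1]))
                        / (g[0] ** 2 + g[1] ** 2 + 1e-12))
        d = [-gn[0] + beta * d[0], -gn[1] + beta * d[1]]
        g = gn
    return w

w = cg_minimize([0.0, 0.0])
```

A fixed-step version with a poorly chosen epsilon either crawls along the shallow axis or oscillates across the steep one; the line search adapts the step to whatever direction it is handed, which is the practical advantage described above.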

[Mar. 30, 2018] - Extended the third check/verify dataset to Mar. 29 data.  Gives 23 obs. days, with a coefficient of accuracy of 52.17% (12 right out of 23 obs.).  By the width of a hair, but on the right side of the line.  The Xerion-AI (a BJV-NN) can create an indicator worth looking at.  Not perfect, but it's a working AI Augmenter for market action. (Image at right: Xerion running MarketNet on Linux-box)

[Mar. 27, 2018] - Interesting Xerion results to report:  Used different price datasets to assemble boolean jump vectors, made the jump-vector filter delta smaller, and ran a NN which was previously trained on two epochs - the original Feb 08 1995 to May 26 2017 dataset, 5614 observed price-days, then trained on a second epoch which was originally the test&check dataset (June 8 2017 to Feb 12 2018), 172 price days. Got 68.41% accuracy on the main dataset, 58.14% accuracy on the smaller dataset.  Ran against a 3rd dataset from Feb. 13 to Mar. 23rd, 2018 (20 days), and got exactly 10 for 20 (50.00% accuracy). Noteworthy is that my BJV-NN successfully forecast the unusual downspike in price of the target security prior to its going ex-dividend (an event that generally does *not* occur).  Had I taken the trade (FD: I didn't = still holding), the delta gain to me would have been approx. $10K.  Market is in an unusual flux currently, given trade-war talks, and the (maybe Mossad?) poisoning of the retired Russian spy in England.  If you wanted to damage Russia at low cost, this was clever & sadly effective tradecraft.   Strange times..  I read on the FB TensorflowML page about some fellow who built an Ethereum-based game, and made $500,000 on his first day of offering the game in the wild.  Seems a little extreme..
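My exact jump-vector encoding isn't spelled out above, but the general idea of a boolean jump vector with a filter delta can be sketched roughly like this (the encoding choices and the toy prices are illustrative only):

```python
def jump_vector(prices, delta):
    """Encode each day-over-day move as boolean 'jumps': up=1 if the move exceeds
    the filter delta (as a fraction of price), down=1 if it falls below -delta.
    Moves smaller than delta are treated as noise and encoded as no-jump."""
    up, down = [], []
    for prev, cur in zip(prices, prices[1:]):
        change = (cur - prev) / prev
        up.append(1 if change > delta else 0)
        down.append(1 if change < -delta else 0)
    return up, down

# Toy closing prices; a smaller delta admits more (and noisier) jumps.
prices = [100.0, 101.5, 101.4, 99.0, 99.1]
up, down = jump_vector(prices, delta=0.01)
```

With delta=0.01, only the +1.5% and -2.37% moves register; the two sub-1% wiggles are filtered out. Shrinking delta is the knob being turned in the entry above.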

[Mar. 26, 2018] - Weather has become sunny, but cold.  Rain is forecast.  Reviewing the TensorFlow tutorial for MNIST stuff.  (Digit and face recognition works.  But the code environment is a bit of a hot mess...  or perhaps I am being too critical.)    Xerion requires I down-convert Tcl/Tk to 7.3, which is a pain on a new machine (the Black Asus, which now has Lynx, WINE and the TSM-Database all running nicely..)  But the NN is the AI edge...

[Mar. 16, 2018] - We're back into a deep-freeze which suggests (to those who follow trends and like bell-curves) that spring remains far in the future, which of course is not the case.  The high-volatility weather, combined with the relentless certainty of the seasonal changes, is perhaps why farmers can become good investors. 

[Mar. 01, 2018] - Crazy busy outside with farm equipment. Like software, the grief or the success of a system is generally driven by two things: the ease-of-use of the interface, and the overall system reliability.

[Feb. 21, 2018] -  I found I could train the Xerion Jumpvector AI down to 91% accuracy, but could not get better than 45% accuracy on the post-training test dataset. Obviously, as I train tighter on the training data, I push the training first towards the signal, then away from it, as I train down to noise.  I ran 13 separate train-then-test exercises - creating 13 different network weight-matrices.  They all perform pretty much the same. So I trained part-way (to about 72% accuracy) on the main training data - then loaded in the test data, and trained just a little part-way with it, then re-loaded the main training data, and trained a bit more.  I stopped the training early (it was using an efficient conjugate-gradient direction selection, with a line-search step method), so you have to watch the gradient vector carefully.  This gives me semi-final results for the 13th network weight matrix of 68.41% accuracy on the primary training data (now running from Feb. 1995 to June 2017), and a level of 58.14% accuracy on the test data set.  If this result holds going forward, then I might have something that can directly assist trade selection.  The true signal is very weak and fraught with noise, but it looks like it might be there.

[Feb. 20, 2018] - Revisited the whole Xerion-AI project:  With my third version of network weights, for the 4500 record boolean jump-vector dataset (which only trains to 87% accuracy), I can now get 41.17% prediction accuracy on my test boolean jumpvec dataset, for July 27 to Feb. 15, 2018.  That is a staggering improvement from the 24% accuracy I was consistently showing.  If I can push the accuracy up above the 50% level, then any casino gambler will tell you that you might just have an edge, if one can manage the betting very carefully.  See info below for this date for more details.

[Feb. 18-19, 2018] - Updated the Xerion NN-AI for dataset from June 27, 2017 to Feb. 14, 2018.  Accuracy coefficient is 23.4 percent - random results with a couple of degrees of freedom, I am assuming.  (Two purely random variables, each with a 50% chance, so we get a consistent percentage of correct predictions around 25 percent.  Not much use, I fear.)  See third screenshot from top.  Experimenting with TensorFlow.  Built the MNIST stuff, loaded sample MNIST image datasets, set up a simple network, trained it, and got accuracy of 90.18%.
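The ~25% baseline is easy to sanity-check by simulation, assuming my reading is right that a "correct prediction" has to match on two independent 50/50 components:

```python
import random
random.seed(0)

trials = 100_000
hits = 0
for _ in range(trials):
    # Prediction and outcome are independent fair coin flips, on two components;
    # a "correct prediction" must match on both.
    match_1 = (random.random() < 0.5) == (random.random() < 0.5)
    match_2 = (random.random() < 0.5) == (random.random() < 0.5)
    hits += match_1 and match_2

rate = hits / trials    # hovers around 0.25 = 0.5 * 0.5
```

So a 23.4% score is statistically indistinguishable from this no-information baseline, which is exactly the complaint in the entry above.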

[Feb. 6-17, 2018] - The V-bottom is back in fashion, yes?   Mkt action typical of a tightly-coupled feedback (feedforward?) self-reference system.   Probably computers should not be connected to trading feeds.   No benefit to real investors and companies using the stock market to raise funds.  But the pros have to have some way to make money off of the farmers, clearly.   Too bad the Chinese investors got nuked by the SEC for trying to acquire the tiny Chicago Stock Exchange.  (Did anyone even know there was a "Chicago Stock Exchange"?)  Funny also how the Russians are being painted as the bad guys again.  Most of the world disliked Hillary Clinton (if they had any memory of the Clinton lawbreaking projects back in the old days.  Remember her "Cattle Futures" trading?)  Curious to see the latest American witch-hunt play out.

Elon Musk & SpaceX Team had a successful launch of the Falcon-Heavy multi-rocket.  Bravo to them! (See "ExoPlanets & Space" for image of "Starman" driving to Mars in Elon's red Tesla Roadster!)

[Feb. 05, 2018 ] - Ugly mkt action provides opportunity for new positioning.  Been focused on mkt action.  Curious how all the news services completely miss the trigger events, and interview analysts who prattle about the US Fed. Reserve and interest rate deltas and 10 Yr US Treasuries at 2.85%.  Ho hum.  No new news there.  Fully discounted, of course.  It is the Bitcoin/NEM meltdown, and the $500 million theft from Coincheck in Japan which is driving this.  (Bitcoin Investment Trust thing in Cda did a 91 for 1 split recently.  Perfect indicator of a market floating on foam and fluff-puff.)  Bitcoin and other cryptocurrencies are a great idea as a transactional tool - but as investments, they are just another mechanism to prevent compound growth from operating long-term.  Important to keep the world poor and hungry or no one will go to work, will they?   All investment *schemes* must vaporize wealth, or within 300 years every family would have billions of dollars, wouldn't they?   Studying CNN's (convolutional neural networks, from Stanford course: CS231N)

[Jan. 30, 2018] - I was getting different numbers on the MacBook (MacOS 10.10.5) and the Linux boxes (CentOS-7.4, with current Linux kernel) in Python+TensorFlow simulation.  Downloaded & converted floating-point test suite: UCBTest,  (UCB=Univ. Calif. Berkeley, Sun Microsystems & W. Kahan, early 1990's). Also built "chkprec.c" to tweak precision control-word in Intel (for 32-bit chips), and to my surprise, the program also works to set precision on 64-bit Core-i3 Intel SSE2 chips, if running Linux (CentOS-7.4).  Resolved TensorFlow simulation problems on MacBook, by re-coding program to use *all* 64-bit (float64 instead of float32) floating point variables.
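The float32-vs-float64 divergence needs nothing TensorFlow-specific to demonstrate. A stdlib-only sketch, rounding every intermediate sum to single precision via struct (the 0.1-summation example is a standard textbook illustration, not my actual simulation):

```python
import struct

def to_f32(x):
    """Round a Python float (an IEEE 754 double) to the nearest IEEE 754 single."""
    return struct.unpack("f", struct.pack("f", x))[0]

tenth32 = to_f32(0.1)        # 0.1 is inexact in binary; single keeps only ~7 digits
acc32, acc64 = 0.0, 0.0
for _ in range(100_000):
    acc64 += 0.1
    acc32 = to_f32(acc32 + tenth32)   # round every intermediate sum to single

err32 = abs(acc32 - 10_000)  # drifts visibly - the per-add rounding errors compound
err64 = abs(acc64 - 10_000)  # stays tiny
```

The same compounding is why switching every variable in an iterative simulation from float32 to float64, as described above, can change not just the precision but the qualitative outcome.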

Key Results:  [Jan. 04-11, 2018] -  Successful build of TensorFlow 1.4.1 from source, on Macbook (Yosemite, MacOS 10.10.5), using Xcode-7.2.1, with Bazel 0.9.0.  [See Research Log below the pictures] - AND successful install of the Python "wheel" file into Python (built using the Tensorflow script file: build_pip_package).  Crazy simple: Just rename or copy the newly-built Python wheel-file from "tensorflow ... cp27-cp27m-macosx_10_4_x86_64.whl" to "tensorflow-1.4.1-py2-none-any.whl" and pip installs it fine, no problem.  Ran tests to confirm the binary TF and the newly-built-from-source TF run the same.  See full details of how to build at closed issue:

Strange issue.  Getting different results when running Laplace PDE simulation example on Linux and Macbook.  Mac version evolves to big positive numbers everywhere, whereas Linux version evolves to big positive and negative numbers.  With the sim tensor np.clip-ed to 0-255, the Mac version evolves to blank white image.  On a Mac running Sierra, my Linux box, and an Ubuntu Linux box, the sim evolves to something very similar to the image at right.  Unsure which is correct, doing tests.  [Update: Linux was correct, as usual.]
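The relaxation kernel at the heart of that Laplace example is tiny, and easy to sanity-check outside TensorFlow. A stdlib-only Jacobi sketch - the grid size, iteration count and boundary values are arbitrary illustration choices, not the tutorial's actual parameters:

```python
def relax(grid, iters):
    """Jacobi relaxation toward Laplace's equation: each interior cell repeatedly
    becomes the average of its four neighbours; boundary cells stay fixed."""
    rows, cols = len(grid), len(grid[0])
    for _ in range(iters):
        new = [row[:] for row in grid]
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                new[i][j] = (grid[i - 1][j] + grid[i + 1][j]
                             + grid[i][j - 1] + grid[i][j + 1]) / 4.0
        grid = new
    return grid

# Arbitrary toy boundary: top edge clamped at 255, all other cells start at 0.
g = [[255.0] * 5] + [[0.0] * 5 for _ in range(4)]
out = relax(g, 200)
```

A correct run must obey the maximum principle - every interior value stays strictly between the boundary extremes (here 0 and 255), decaying monotonically away from the hot edge - which gives a quick platform-independent way to tell which of two machines is evolving to nonsense.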

What's Here...

  • Picture above on right:     Originally, a hybrid-image created with Tensorflow 1.4.0 on CentOS-7.4 Linux box, later, an image of the original 1954 Godzilla, and now a tsuba (Japanese sword hand-guard) from Edo period, now at MoMA in NYC.  This tsuba is from a high-quality weapon, and demonstrates the maker's skill and technological level.  A fine weapon is an example of highly-effective technological augmentation, which is what we must strive to make AI provide.
  • Images One and Two:  Xerion Neural Network simulator, running basic AI software against boolean delta (change) vector training sets.  Picture One is a recent result, running on the new CentOS Linux box.  (Xerion was originally built as a research tool to run on Unix boxes from Sun Microsystems, back in the mid-1990's).  I hacked it to run on Fedora and CentOS 6.6 Linux.  It uses Tcl/Tk (with object-oriented extensions) to set up and inspect the training and runnable datasets, and to load/save the network weights.
  • Picture Three:          Experiments with the Laplace image-generation program in TensorFlow 1.4 on Linux and Macbook.
  • Picture Four:            Showing results of Python/IPython+Jupyter install and configure with various key packages, including Tcl/Tk (for use with networkx, matplotlib, pillow, etc.).  Works very well.  Real work can be done.
  • Pictures 5 to 10:       Images from the Xerion project in 2017.  The Xerion product is a neural-network framework from the late 1990's, developed by Dr. Hinton's team at University of Toronto.  I used it here to build and train a neural network to predict the expected direction of market prices, based on boolean-encoded slices of price data for several securities and economic series, sliced also across time.  The process works, but initial results suggested (again) that the recent past cannot say much about the near future.  Recent results, with different inputs, are showing more promise.
  • Next section - the AI Research-Blog:  Field Notes on AI from Lorcalon Farm - where I posted a daily log of what I was doing, and the results (or lack thereof!) that I had been getting.  Did some experiments with TensorFlow 1.4.1, and got it to build successfully from source on my Macbook.  But Xerion is stable, and I can get results quickly with it. (And it does its calculations correctly.)
  • Details of how I got Sharp APL to run on the iPad.  (Uses my hacky version of DOSbox, available on Android, as "gDOSbox" at the Google Play Store). I've put sAPL up as freeware, at the Google PlayStore, for modern Android Tablets.
  • GEMESYS Android Apps - gDOSbox, GNUPlot37, SharpAPL, WatcomAPL, IBM's TryAPL2 and STSC(Manugistics) APLpc - all available as no-cost apps from Google Play Store. No tracking code or in-app selling attempts.  Research results to see if it was possible.  The gDOSbox (and the GNUplot37 and the APL's) do their math correctly.
  • Pictures of versions of background work related to "AI Helper Apps" - Various useful tools I've built to run charting, number-processing and image-generation on the iPad and Android tablets. I first started building this stuff on the Blackberry Playbook, and still have a couple of Playbooks that run Market Price Analysis software similar to what is shown in these images.  (Internals are built using Manugistics APL)

This website is a work-in-progress.  I hope it is useful as a learning resource.  I will attempt to clean it up a bit and re-factor things (it's a bloated mess now... but I am telling the truth as I find it and see it).  Scroll down to see what is described above.

The whole thing started as the result of a technology/investment assessment I did on a small company looking to raise a second round of financing.  I was brought in by the investor group to provide a quick independent assessment.  (The company's tech was fine, and they got their money, btw.)  As part of the initial discussion (where everyone sniffs everyone's tail like my dogs do, to see if they are bona-fide or not), I mentioned the work I had done on neural-networks in the 1990's.  The new technology-lead mentioned that Google had just open-sourced TensorFlow, and that really got my attention.  I had gone to Dr. Hinton's lectures back in the mid-1990's, gotten a copy of the Xerion product, bought a copy of Slackware Linux (since Xerion was Tcl/Tk and "C" based and ran on X-windows), built a working environment on an IBM P/C (and learned Linux and Tcl/Tk along the way), and had used all this to create datasets, transform them, create and train a neural-net, and build a forecasting procedure to predict commodity futures prices.

The thing sort-of worked, but not as well as I had expected.  Until very recently (March-April 2018) I had considered it a complete fail, actually - but I learned a lot.  The Linux and Tcl/Tk stuff turned out to be *really* useful - even more so than the neural-net stuff, since it just did not forecast very well.  But the skills were platinum, and I got pulled into many other projects, which were both interesting and lucrative.  [ I had been doing a consulting/implementation project for a major Canadian Bank/Brokerage firm.  It was Jim Doak, a really fine fellow who was the Research Director at ScotiaMcleod, who put me on to the Hinton lectures.  I remember being just blown away by what NN's had been able to do.  I got Slackware Linux, built a Linux-box using an 80386 P/C, and got Xerion running on it.  Doak went on to become a venture-capital guy, and then part-owner of a uranium exploration company that found a massive uranium deposit in Mongolia.  The mine was stolen by the local "Government" (and given to Russians) after the ore was found, and Jim died in Ulan Bator attempting to collect on a World Court 100 million US-dollar judgement.  I read about all this in the public media, so what really happened? I don't know.  But sometimes one guy can vector another fellow's life off in a different direction.  Jim was a very good analyst, and an honorable fellow - like a lot of good guys in the Canadian finance business. ]

But the open-sourcing of TensorFlow meant I could take another run at the market price prediction ideas.  I had a bunch of ideas for a different approach, including using only booleans (up, down, or zero=don't know/not enough signal).   This approach had promise, and I decided to document the whole process on this website as I proceeded.  There is value in being formal and posting results, because it keeps you on track.
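A minimal sketch of this kind of signed-boolean jump encoding (the function name and the delta threshold are illustrative - this is not the actual TSM converter code):

```python
def jump_vector(prices, delta=0.5):
    """Encode day-over-day price changes as signed booleans:
    +1 = up move larger than delta, -1 = down move larger than delta,
     0 = change too small to call (don't know / not enough signal)."""
    out = []
    for prev, cur in zip(prices, prices[1:]):
        change = cur - prev
        if change > delta:
            out.append(1)
        elif change < -delta:
            out.append(-1)
        else:
            out.append(0)
    return out
```

With a smaller delta, fewer changes get coded as 0, so more information survives into each training vector - which is the direction the later "lower value filter" experiments went.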

At first, I had planned to use TensorFlow, but the multi-machine loosely-coupled environment I've cobbled together at the Farm is almost all built on 32-bit machines.  I managed to get TensorFlow installed and working on a new MacBook I had, but the Apple environment is annoying.  (Later, I bought a 64-bit HP, and built TensorFlow from source on CentOS 7.4 Linux.)  But Apple is really annoying. I learned that in my initial work that involved hacking my iPad.  I wanted to use the iPad to run local versions of the trained network, so I could just drop in current price-data (after it was boolean-converted), and get my "Go Long / Go Short / Can't Tell What to Do" market-decision assistant tool, by pushing the data thru the trained neural-net.  And because TensorFlow cannot be installed on any 32-bit devices (the binaries are 64-bit), and the development/build environment is very non-standard (it uses "Bazel"), I decided to use Xerion, as I was able to modify it, and the Tcl/Tk stuff, to run on the modern Fedora Linux boxes I was running (and now CentOS 6.6).

I got all that working - and the details re. building Xerion, hacking the iPad, and the work related to installing my own software on it (basically, the open-sourced DOSbox + a special DOS-based APL interpreter from the days of IP-Sharp) are documented here.  The first-generation iPad is a marvelous, wonderful device.  Mine is circa 2010 - and it is still running strong and I use it every day, despite having a bunch of other tablets and computers.  The UI and the UX are just plain very well done, and even with current modern Android ART stuff (on a Samsung Tablet I have), the Apple is just so much nicer to use.  And with the hacks that open up the O/S, and give you a Linux-like environment with full "root" access, you get a real computer which can do real work - like grind thru a few matrices and calculate a result value.  (You can also use "" to watch any Youtube video, as they just render to Quicktime, and the thing just works.)

But it was the Xerion work that comprised the main project documented here.  I built the thing, and much of the site is devoted to documenting what I had to do to actually make the old Xerion product work successfully, plus the design and development of the data-management tool and the boolean converter.  I had built a Time Series Management database, and procedures to keep it updated and corrected, but it ran on Windows.  I converted it to run on Linux.  This involved running WINE (a framework that lets Windows programs operate nicely within and under Linux), and it works great on the Fedora and CentOS boxes I have.  Developing the Xerion-based neural-network, training it, and then evaluating it was a fairly big exercise.  You can read all about it here, as I logged the daily efforts.

And it didn't work either, just like Dr. Ng of Stanford, in his excellent lecture series, suggests it probably won't.  Market price action is inherently unpredictable.  You can have an edge (I know this to be true, because I seem to have one - even though I am not sure what it actually is...), but basically, the recent past has no data within it to say anything much at all about the near-future.  This is the third major project I have done that confirmed this.  Details are posted on this site.  [April 2018 Update: ... took another run at the data, different data, different methods to set up the boolean training vectors.  Getting >50% accuracy on post-training datasets.  Maybe the market is just going coherent?  Might have something..., still too early to tell...  April 28, 2018.  Cash in the bank (only a few K).  Maybe it works?  Realize I probably need to apply Bayesian adjustments; maybe I'm doing my estimates wrong..]

And lots of other stuff is posted also.  I decided to re-think the whole process of how an AI (Artificial Intelligence) device should operate for an investor, in a market context.  And in other contexts, also.  Basically, you don't want to try to predict - because you pretty much cannot.  But you can still make better trades and better decisions to get better than average outcomes.  I know, because I have done this - and so have a few others.  And I am not terribly clever or smart - I am pretty average, and actually, rather stupid and careless quite often - certainly more often than I should be.

So, I decided to build TensorFlow on a 64-bit Linux environment, since I just cannot stand the Apple stuff - I just don't like the problems and issues that restrict, prevent, limit and frustrate me at every turn using Apple OSX.  It is ok - it sort-of works - but the hassles I went thru putting DOSbox on my iPad were just beyond anything reasonable.

I had built a bunch of freeware apps for Google Android with much less difficulty, and they remain available - at zero cost, and with no in-app tracking or advertising - on the Google "Play Store" (an idiotic name - but hey, I am a dodo maybe...).  

The Android apps I built are documented here also.  You can page down to see them - gDOSbox, GnuPlot37, and several APL interpreters: IBM's TryAPL2, the IP Sharp APL (uses the actual assembler-code for the old IBM 370, and an MS-DOS interpreter), the freeware Manugistics/STSC 16-bit PCAPL, and the Watcom APL.  APL is really good at doing things like dot-product, matrix math, and other tensor-fiddling.

It turned out putting TensorFlow on the 64-bit Linux I wanted to use, was a non-trivial exercise.  The Google/TensorFlow team only supports one version of Ubuntu, and all my machines are Fedora and/or CentOS based.  But it turned out to be do-able.  I had to configure and build a local version of Python 2.7.   But I have succeeded in getting the binary of TensorFlow 1.4.0 installed and running on CentOS-7.4, and using the new, latest Linux kernel, 4.14.9.  The new experimental box is an old HP Intel Core-i3 I bought as a testbed - but it has 4 processors, runs the latest Linux, the latest systemd based CentOS, and is the latest 7.4 variant, which does not go "end-of-life" until June of 2024.  

I had four screen shots showing the results of getting the CentOS-7.4 box up and running, and getting TensorFlow 1.4.0 (almost the latest one), installed and loadable on it.  And note, I am using Python 2.7.14 - the latest (November 2017) Python of the 2.7 stream.  I decided to use 2.7 stream, as that is what most of the documentation and data-science material I can find has as its default.  Plus, if you use Python 2.7.14 (the 2.7 stream), then you can be assured the language won't be *changed* as you develop within it!  (Hear me here... "Stability" is the new killer-app...)

The TensorFlow Tutorial had a simple simulation program that generated images, using Laplace partial differential equation (PDE) math, and the initial program used IPython (interactive Python) and Python Notebook (which is now Jupyter Notebooks), to show a real-time simulation of rain-drops falling on a pond.  I fiddled the damping parameter, the image display mechanism, and the background to create something that looks more like star formation via gravitational mass-accretion.  This program verifies that Tensorflow has been installed successfully, and is doing its mathematics correctly.  Oh, I also converted it from the TensorFlow 1.00 version (which I downloaded initially for the MacBook), to what is now TensorFlow 1.4.0, which has a more restrictive and explicit requirement to define a current session. 
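For anyone wanting to check their own setup without TensorFlow, the core of that raindrop simulation is a damped wave update driven by a discrete Laplacian. A plain-Python sketch (the grid handling and the eps/damping values here are illustrative choices, not the tutorial's exact code):

```python
def laplacian(U):
    """Discrete 5-point Laplacian of a 2-D grid (edges treated as zero)."""
    n, m = len(U), len(U[0])
    L = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            up    = U[i-1][j] if i > 0     else 0.0
            down  = U[i+1][j] if i < n - 1 else 0.0
            left  = U[i][j-1] if j > 0     else 0.0
            right = U[i][j+1] if j < m - 1 else 0.0
            L[i][j] = up + down + left + right - 4.0 * U[i][j]
    return L

def step(U, Ut, eps=0.03, damping=0.04):
    """One time-step of the damped wave equation
    U_tt = laplacian(U) - damping * U_t, via a simple Euler-style update."""
    L = laplacian(U)
    for i in range(len(U)):
        for j in range(len(U[0])):
            Ut[i][j] += eps * (L[i][j] - damping * Ut[i][j])
            U[i][j]  += eps * Ut[i][j]
    return U, Ut
```

Drop a "raindrop" (a single nonzero cell) into U, call step() in a loop, and the ripple spreads outward - the same behaviour the TensorFlow version renders as images.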

The screen image number three below shows the TensorFlow Laplace PDE program, modified to just display iterative .JPG file-based images, so it can be run in plain Python, rather than requiring the "Jupyter notebook" to be run - which basically fires up a local web-server, and then serves up the image material - ie. matplotlib graphics, and such - using your local web-browser, in this case Firefox 52.2, the CentOS-7.4 default browser.  Oh, and actually you have to toggle Firefox 52.2 to *be* the default browser, if you want Jupyter to automatically invoke itself correctly.  I used to have a little screen-image at the beginning of this document, which showed the result of letting the Laplace simulation run 10,700 times, instead of the 1,470 iterations in the image below.  The code for the Laplace sim is provided in the "Code" section - just click on the top-line menu, and you can cut and paste it to check your Python and TensorFlow setup.

Much of the site is basically just the daily weblog notes.  I will re-organize things soon, I promise... Just scroll down to see the: "[Month Day, Year]" headings.  Go to the bottom and read up (it's basically a blog, or a diary), or start at the top, and read backwards in time.

My plan now is to get the Kepler Telescope data and source-code - which used TensorFlow to assist in the discovery of many new exoplanets - and adapt that to what will be my "market pictures" - so I can use the MNIST-style image-classification procedures to classify and characterize my "market pictures".  It won't be forecasting - just assistance and augmentation to assist us in what we already are doing.

I think that is how AI technology will work.  It won't replace people, it will assist them.  It will just amplify their abilities, like so many of our important inventions have done.  Stay tuned, as they say...

- Mark Langdon,  Director & Owner
  January-April, 2018 

 PS: Getting rather exhausted, unfortunately.  Need to address IP issues.  Torn between my thinking as a scientist (we should be fully open, and publish our work), and requirements of an entrepreneur, which suggests one's key work must be confidential, lest it be stolen and used by others, or even worse, one risks being attacked by "patent trolls" who sift thru patented stuff, and assert that you are violating someone else's patent you don't even know about.  I may have to shut down this website, for obvious reasons. Sad, as I prefer the approach of the scientist.  But I need to eat, too...

Latest Results - as of Apr. 13, 2018. Post-training dataset showing 56.25% accuracy. A slight edge...

I had just about written off this approach, when I had another idea. Minor changes, and much *lower value* filter used to create the jump-delta boolean vectors. This puts a lot more information into each vector. And this improved results significantly. Plus, I "tuned" the training activity. This is network number 13, which gets 68.41 % accuracy on the main training dataset, and 58.14 % accuracy on the test data set. The results for the test dataset are shown here. Used conjugate gradient for direction, with line-search (Rudi's). Message to you, Rudi: "Thanx for this!"

Laplace star-formation simulation, running under latest TensorFlow 1.4, on CentOS-7.4 (with latest Linux 4.14.9 kernel). This result matches exactly the TensorFlow run on Apple Macbook OSX - but we are now in a pure Linux environment, using latest code for CentOS 7, the Linux kernel, and TensorFlow 1.4.1. Linux and the Apple Macbook now evolve the simulation the same, but I had to create a variant that uses "double" precision (64-bit) floating values on the Macbook, due to curious problems with MacOS Yosemite.

I have Jupyter/Interactive-Python working correctly on Linux, and this example shows scikit-image being used to create a gray-scale version of the "testimg.png" file, which I built numerically using Python with Numpy and Pillow (the updated PIL - Python Image Library). I also have exactly the same environment now built on Windows. You need the MS C++ compiler to run "pip install scikit-image", as well as a copy of the "stdint.h" header file. I have created a detailed log of what I had to do to get Jupyter/IPython and the routines for these examples running on Linux and on Windows. I will post it once I edit it down, and remove some of the unprofessional language used in the current version (which is so full of profanity as to be unpublishable at the moment...). I had not planned to put the Python image manipulation environment on the Windows box, but I managed to get it working - after downloading a bunch of material as a test - including the MS Visual C++ 9.0 compiler for Python 2.7, which Microsoft offers at no cost. More info will be offered in the notes on how I got Python and the image libraries to run on Linux and Windows. I also got Jupyter+IPython to run on Windows, and ran the same histogram and line-graph test programs I've run on Linux. The example shown here is Linux, CentOS 6.6 on an Intel box, running a Linux 2.6.32-504 kernel, with Gnome 2.28.2. Everything works nicely. The Windows versions of these image test programs look the same.
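For reference, the gray-scale conversion scikit-image performs is just a luminance-weighted sum per pixel. A pure-Python stand-in, using the same ITU-R 709 weights that rgb2gray uses (the list-of-rows pixel layout here is an illustrative assumption):

```python
def rgb2gray(pixels):
    """Luminance-weighted gray conversion (ITU-R BT.709 weights, as used
    by scikit-image's rgb2gray).  Pixel values assumed in 0.0 - 1.0."""
    return [[0.2125 * r + 0.7154 * g + 0.0721 * b for (r, g, b) in row]
            for row in pixels]
```

This is handy for sanity-checking what the library produces: pure white should map to 1.0, pure black to 0.0.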

Update: Got it working on the older Linux, also. Very cool. Actually, really surprising. By doing the manual builds, one really learns how the components are "glued" together. You need to get the "_tkinter" (Tcl/Tk interface) stuff working, to render images using "matplotlib", without the "notebook" stuff. (The little window, bottom right, with the gray-scale version of the colour image.) This means you can build research-grade, one-off apps, to address specific, immediate needs. (See the "Linux Jupyter/Python" section for some *very* preliminary notes, including my hilarious "real-time" note on what I had to do to get Python, Jupyter, IPython Notebooks and Scikit-Image with Tcl/Tk image-rendering working on my Linux laptops.) I should edit it up, but I think it is perhaps helpful to see what the user-on-the-edge-of-the-network faces when trying (successfully!) to get software working. (The Linux laptops are truly "cyberspace decks", as per William Gibson's famous 1984 "Neuromancer" novel, published when the IBM P/C was 1 year old.)

Neural-Network-AI Experimental Results: Developed portable Xerion + TSM + Lynx(ssl-enabled) + GNUplot platform on Linux (Fedora/Redhat) laptop platform, (ACER with Intel Centrino). This Linux laptop (Gnome Desktop) also runs current Firefox (modern gtk+2, glib, gdk, etc.). Wine - Windows emulator on Linux - is used to support a runtime-version of TSM, the Time Series data manager, which transforms raw price data into training cases for the Xerion-configured neural network (NN). For the current NN-driven AI under test, the training is sourced with boolean impulse-data from various daily market prices for tradable securities and commodities, for an 18 year period. The resulting neural-network can be evaluated for current datasets (ie. the last couple of weeks) on either this platform, or using an iPad or Android tablet.

[June 28, 2017] - New image, with Probability Calculator, Time Series Manager (with linked GNUplot graphics), and Xerion NN-AI (cmd-line mode runs the GNUplot display; the Xerion gui shows Hinton Diagrams of network unit values for the most recent data case). The "plotValues" tcl/tk program shows the boolean training target, and the output of the network's boolean prediction, in the bottom-centre chart. All is integrated using the Fedora/RedHat platform, running on the dedicated AI box, an Intel 32-bit uniprocessor. The Linux utilities "DOSemu" and "Wine" [WINdows Emulator - or "Wine Is Not an Emulator"] are used: DOSemu supports the Probability Calculator app, and WINE runs the Time Series Manager. Xerion was compiled from UTS source, with various minor modifications to support a modern (sort of) Linux kernel (Fedora/RedHat Kernel #1 SMP - the kernel is "old" now, but has a few custom bits compiled in). Everything together at last, and running well. Results looking good - both technology, and market tone. Note that I modified the GNUplot display of "Actual" vs "Network Forecast" to show the predicted boolean output on the top (green line), with the actual training target on the lower line. This makes it easier to see the most-recent predicted network value, which can be expected to drive one's tactical market efforts. FD: I remain fully invested, long.

Here is an image of the tanh (hyperbolic tangent) function from Gnuplot37, overlaid with the hypertanf sAPL function from the "neuralxr" workspace. This sAPL workspace will accept the MNnet4~1.WTT file of Xerion weights for the MarketNet network, and use dot-product to vector-multiply the weights to "activate" the Xerion-trained network. This will let me "run" the network on the iPad. I wrote a function to load the Xerion weights file into sAPL (format: wt <- readfile fname) and a second function to convert the text into numeric (format: wnet <- procwt wt). Currently, wnet is just a high-precision vector of 1281 32-bit floats. Since I'm using hyperbolic tangent instead of logistic as my transfer function, I needed to write this tiny transfer function. The tanh function already exists in GNUplot37. You can start GNUplot, and just enter "plot tanh(x)" and see this S-curve, which is the mechanism by which machine-intelligence is stored in a neural-network. Getting closer to an NN-based iPad-runnable Augmenter. [Update: I wrote the function on top-left, but then remembered the APL built-in trig. functions, and yes, "7oX" gives hyperbolic tangent for X. The "o" operator is "ALT-o", and when used dyadic (two arguments), it gives access to all the trig. functions. With full precision of 18 digits enabled, the built-in "tanh" function gives slightly more precise results.]
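The "activation" step the sAPL workspace performs - dot-product the weights against the inputs, then squash with tanh - can be sketched in Python. The weight layout below (lists of rows, plus bias vectors) is an illustrative assumption, not the actual MNnet4~1.WTT format:

```python
import math

def forward(x, W1, b1, W2, b2):
    """Evaluate a one-hidden-layer net with tanh units:
    hidden = tanh(W1 . x + b1), output = tanh(W2 . hidden + b2)."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return [math.tanh(sum(w * hi for w, hi in zip(row, h)) + b)
            for row, b in zip(W2, b2)]
```

This is all "running" a trained net really is - which is why a DOS-era APL interpreter on an iPad, with its built-in dot-product and tanh, is entirely up to the job.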

This screen shot from the Linux AI-box is a quick way to post results - not sophisticated, but clear. Speaking of "quick", I used the "quickProp" method here, which models derivatives as independent quadratics. The method tries to jump to the projected minimum of each quadratic. This is one of the minimization methods in Xerion, and it has worked well on my signed boolean data. (See: S. Fahlman "An Empirical Study of Learning Speed in Back-Propagation Networks", 1988, CMU-CS-88-162, Carnegie-Mellon University.) Typically this method uses fixed steps with epsilon of 1, but I used a line-search here. The error value (f:) is driven down below 300, with a gradient vector length of less than 6. From the plotValues.tcl chart, one can see it improves on the previous result. If this network is this good on a different dataset outside the training example, then we might just have something here. I want to thank Dr. Hinton and everyone at U of Toronto for making Xerion available.
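The per-weight quickprop jump from Fahlman's paper can be sketched as below (the growth cap and the fallback handling are my simplifications; Xerion's actual implementation has more safeguards):

```python
def quickprop_step(grad, prev_grad, prev_step, epsilon=1.0, max_growth=1.75):
    """One quickprop update (after Fahlman 1988): fit an independent
    parabola per weight from the current and previous gradient, and
    jump toward its projected minimum.  Falls back to a plain gradient
    step when there is no usable history."""
    new_step = []
    for g, pg, ps in zip(grad, prev_grad, prev_step):
        if ps == 0.0 or pg == g:
            step = -epsilon * g              # ordinary gradient-descent move
        else:
            step = ps * g / (pg - g)         # jump to the parabola's minimum
            if abs(step) > max_growth * abs(ps):
                # cap the growth so the quadratic model can't explode
                step = max_growth * abs(ps) * (1 if step > 0 else -1)
        new_step.append(step)
    return new_step
```

On a true quadratic error surface this lands on the minimum in one jump, which is why it trains so much faster than plain backprop when the local quadratic model holds.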

Running Xerion with gui, running backpropagation using conjugate gradient and line-search, with new network with twice the nodes. Error level (F:) down below previous 20 node network in less than 400 evaluations. Looks promising...
[Initial Results: MarketNet was built using signed boolean jump coding. Note that for the graphic (Postscript output, shown using GhostView), I tweaked my plotValues.tcl displayer to shift the actual data +3 up, so it does not obscure the network output forecast. The network is called "MarketNet", and is not fully trained, as I need to reset the "tcl_precision" value to 17 (from its default of 6). With improved precision, the network trains further, and should become more accurate. What one needs to do is save the weights, and then try the network on a dataset built for a different time period. This will provide an indication of whether I am just training to noise or not.]

Network Evaluation Results - May 18 to July 21, 2017. The results show that this version of the network cannot accurately forecast the 4-day-forward data value. Co-efficient of Accuracy is 24% - less than 1/3rd, so actually worse than random. This indicates that there is not sufficient information in the dataset (transformed data for 5 days back, across 6 different price series: - SPX, DJIA, BCE, SpotGold( 3pm London fix in US$), Spot_Oil (WTI Cushing Hub US$/bbl) and CM) to make a useful forecast. I had expected results might at least be close to 40 - maybe even 45%, but such is not the case. One can make money trading securities - but forecasting future price levels - even if the data is boolean classified as just higher, same or lower, is not possible here. More data, across a longer time period and with different transformation methods, may improve the network's ability to predict. But this evaluation currently shows the NN-AI has no ability to make accurate predictions of future market direction for the target security.
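The accuracy figure here is just the hit-rate of the signed-boolean forecast against the coded actuals. A sketch (hypothetical function name, not the actual evaluation script):

```python
def directional_accuracy(predicted, actual):
    """Fraction of observations where the signed-boolean forecast
    (+1 / 0 / -1) matches the actual coded move.  A three-class
    coin-flip sits near 33%, so 24% is genuinely worse than random."""
    hits = sum(1 for p, a in zip(predicted, actual) if p == a)
    return hits / float(len(actual))
```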

Field Notes on AI from Lorcalon Farm

Neural-Network Artificial Intelligence: Xerion & the Helper-AI's, APL on an iPAd to "run" the network

GEMESYS Ltd. is the name of my consulting practice.  We do research and analysis in science and technology, with a view to learning, teaching, and helping.  And we look for special economic situations that work.  GEMESYS Ltd was established in 1981, and continues to offer research and consulting services to address unique requirements.  We operate from Lorcalon Farm, in Canada.  (The image at right was made using the Laplace partial differential equation simulation example from Google's TensorFlow tutorials.)

Why Do Datascience? & Why use AI?

Since the 1990's, I've done data-science related work under the radar, as it were.  I've even built amplifiers and radios to learn about feedback processes.  (Building and tuning an actual, physical device teaches one so much.  The math of it gets into your fingertips...)  I read George Soros's stuff on "reflexivity" in the markets (circa 1980's), and I think I am beginning to understand why "technical analysis" actually works.  We used to think it was because it captured the behavioural-economic features of humans (cf. Amos Tversky, Daniel Kahneman, Richard Thaler et al), but now I think there is more there.  If you need to make money using the markets (ie. to pay your bills), you either go broke, or you end up using some form of technical analysis (or, you become a portfolio manager, take a percentage of the assets, and you don't care what happens, as long as you can keep your clients).  But now, there is hard-core datascience, which lets many different ideas be looked at all the time.  Having a good AI helper, with statistically significant results associated with its predictions, can I suspect give one an edge, even if much of the data one encounters is mostly wild randomness.  As a lone-wolf in private practice, you either have a verified edge, or you are quickly carried out, and fall into the abyss.  And it seems AI can give you an edge.  [Mar. 31, 2017.  Well, I guess it's confirmed:  US-based Blackrock, one of the biggest investment funds on the planet now, with $5.1 trillion in assets, has announced that it will sack a bunch of its human stock-pickers, and replace them with *robots* - the term Wall Street uses for AI-driven investment strategies.  Source: Wall Street Journal article, Mar. 28, 2017.]

As time goes by and markets change, I just keep getting more evidence of how any *model* is going to be successfully gamed by the market.  You don't want a model, you want an old, experienced guy to offer some gentle advice.  Since there is no such guy - a *very* well trained AI might be the next best thing, perhaps?

Status Log (Artificial Intelligence/Xerion/Data-Research work) - and most recent; building a CentOS-7.4 64-bit Linux platform (for TensorFlow):

[Mar. 27, 2018] - Interesting Xerion results to report:  Used different price datasets to assemble boolean jump vectors, made the jump-vector filter delta smaller, and ran the NN which was previously trained on two epochs - the original Feb 08 1995 to May 26 2017, 5614 observed price-days, then trained on a second epoch which was originally the test&check dataset (June 8 2017 to Feb 12 2018), 172 price days. Got 68.41% accuracy on the main dataset, 58.14% accuracy on the smaller dataset.  Ran against a 3rd dataset from Feb. 13 to Mar. 23rd, 2018 (20 days), and got exactly 10 for 20 (50.00% accuracy). Noteworthy is that my BJV-NN successfully forecast the unusual downspike in price of the target security prior to its going ex-dividend (an event that generally does *not* occur).  Had I taken the trade (FD: I didn't = still holding), the delta gain to me would have been approx. $10K.  The market is in an unusual flux currently, given trade-war talks, and the (maybe Mossad?) poisoning of the retired Russian spy in England.  If you wanted to damage Russia at low cost, this was clever & sadly effective tradecraft.  Strange times..  I read on the FB TensorflowML page about some fellow who built an Ethereum-based game, and made $500,000 on his first day of offering the game in the wild.  Seems a little extreme..

 [Mar. 26, 2018] - Weather has become sunny, but cold.  Rain is forecast.  Reviewing the TensorFlow tutorial for the MNIST stuff.  (Digit and face recognition works.  But the code environment is a bit of a hot mess...  or perhaps I am being too critical.)  Xerion requires I down-convert Tcl/Tk to 7.3, which is a pain on a new machine (the black Asus, which now has Lynx, WINE and the TSM-Database all running nicely..)  But the NN is the AI edge...

[Mar. 16, 2018] - We're back into a deep-freeze which suggests (to those who follow trends and like bell-curves) that spring remains far in the future, which of course is not the case.  The high-volatility weather, combined with the relentless certainty of the seasonal changes, is perhaps why farmers can become good investors. 

[Mar. 02, 2018] - Upgraded the research LAN.  Got a tiny file server that runs an embedded Linux, and using Samba, now have all the Linux and Windows boxes able to access a common set of files.  Since I deeply distrust "cloud" backup, this approach is nice.  The Amazon S3 stuff is cheap, but creates external dependency as well as security concerns.  I now want to see if I can replicate the Xerion AI results with TensorFlow.  I've built the TensorFlow MNIST number-image recognition example, but they gloss over the data preparation issue with that example.  I want to build a perceptron NN with a single hidden layer, and then try it against my boolean jump-delta data, and see if I can replicate the Xerion results.  TensorFlow has an attractive scalability which will be useful, if I can get this idea to work. With a big GPU, I could add in a lot more data.  I am certain this type of thing is already being done by Goldman Sachs, Blackrock, JPM, etc.  The twitchy behaviour of the markets argues for this.  There is too much volatility of volatility.

[Mar. 01, 2018] - Crazy busy outside.  Warm weather brought high winds and tree-falls.  Got the two-inch ball for the tractor installed, and can now move the big wood-splitter.  Simple, primitive technology, but very effective.  Like software, the grief or the success of a system is generally driven by two things:  the ease-of-use of the interface, and the overall system reliability.

[Feb. 21, 2018] - I think I get the CNN (convolutional neural network) idea now.  I found I could train the Xerion Jumpvector AI to 91% accuracy, but could not get better than 45% accuracy on the post-training test dataset. Obviously, as I train tighter on the training data, I push the training first towards the signal, then away from it, as I train down to noise.  I ran 13 separate train-then-test exercises - creating 13 different network weight-matrices.  They all perform pretty much the same, despite randomizing the net weights each time.  So I trained part-way (to about 72% accuracy) on the main training data - then loaded in the test data, and trained part-way with it, then re-loaded the main training data, and trained a bit more.  I stopped the training early (used conjugate gradient direction selection, with a line-search step method).  This gives me semi-final results for the 13th network weight matrix of 68.41% accuracy on the primary training data (now running from Feb. 1995 to June 2017), and a level of 58.14% accuracy on the test data set.  If this result holds going forward, then I might have my edge.  The true signal is very weak and fraught with noise, but it looks like it might be there.  This is like tuning a radio circuit on a regenerative radio receiver (not a superhet).  If I understand how the Laplacian heat-transfer calculations work, then I think I might also understand how the CNN idea works - and more importantly - why it works.  Each network node ripples its results out to others nearby - probably like neural state-potential leakage would actually occur in our own brains.  This gives us edge-overlap in neuron clusters, and facilitates the fuzzy-state memory model that we use to store information.

[Feb. 20, 2018] - Revisited the whole Xerion-AI project again, and I found something I didn't try - which was kind of obvious, actually.  With my third version of network weights, for the 4500 record boolean jump-vector dataset (which only trains to 87 % accuracy), I can now get 41.17% prediction accuracy on my test boolean jumpvec dataset, for July 27 to Feb. 15, 2018.  That is a staggering improvement from the 24% accuracy I was consistently showing.  If I can push the accuracy up above the 50% level, then any casino gambler will tell you that you might just have an edge, if one can manage the betting very carefully.  Oh, plus, I figured out how to use the "ssh -X userid<at>AI-BOX"  trick, to let me run full X-windows stuff remotely (Xerion is an X application, as it uses WISH, the Tcl/Tk shell).  Plus I use GNUplot to graph "Predicted Values" versus "Actual Values".   I may have been too quick to dismiss the jump-vector approach.  (The AI box running Xerion is in a different room.)  From my CentOS 6.6 box, (on the "bridge", as it were), I can use "ssh -X" to run everything remotely - the Xerion session window and the GNUplot graphs get displayed correctly.  I don't even need to run "startx" or even log into the AI box (which runs an older Fedora Linux).  Plan now, is to hold data constant, and just retrain with the idea I came up with, for a "version 4" of network weights, and see if I can push up the prediction accuracy over 50%, on the post-training test dataset.  If that is possible, then I might have something, as I have updated the TSM price database, and I can now build the boolean jump-vectors with just a button-click.

[Feb. 19, 2018] - TensorFlow experiments.. I built the MNIST stuff, loaded sample MNIST image datasets, set up a simple network, trained it, and got accuracy of 90.18 %.  The TensorFlow stuff does seem to work, but it is perversely difficult to actually find out how to use it in a practical sense.  I have a boolean dataset (from the Xerion AI work) which I would like to run in a TensorFlow NN, but there does not seem to be anything that illustrates how to actually read in actual, real datasets, set up example cases and run the training on a standard neural network.  Associates suggest I use PyTorch and avoid TensorFlow.   The MNIST tutorial stuff is cute, but classification of images using the CNN (convolutional NN's) is not the only thing one can do with this technology.  There does not seem to be any stable TensorFlow documentation, and this means it is maybe more of a kids' playground than a real tool for doing practical development.  And if one files bug-reports, the TensorFlow authors do not seem to even look at them.   This is all just a bit annoying, and may force me back to using Xerion, where at least I can get something built using real data.  And where I can also have some confidence that the arithmetic is being done correctly.
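For what it is worth, the kind of minimal end-to-end example I was looking for can be sketched in plain NumPy rather than TensorFlow (the data below is made up, and the single logistic unit is just my stand-in for a real network): parse a boolean dataset from text, build arrays, train, score.  A TF1 feed_dict would accept the same NumPy arrays directly.

```python
import numpy as np

# Hypothetical stand-in for a boolean jump-vector file: four input bits,
# then a 0/1 target, one record per line.  (Made-up data, not the real set.)
raw = """0,1,1,0,1
1,0,0,1,0
1,1,0,0,1
0,0,1,1,0"""
data = np.array([[float(v) for v in line.split(',')]
                 for line in raw.splitlines()])
X, y = data[:, :-1], data[:, -1]

# One logistic unit trained by gradient descent - the same arithmetic a
# small feed-forward net performs, minus the graph machinery.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, X.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    grad = p - y                            # cross-entropy gradient w.r.t. logits
    w -= 0.5 * (X.T @ grad) / len(y)
    b -= 0.5 * grad.mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = float(((p > 0.5) == y).mean())
```

On this trivially separable toy set the unit learns the rule exactly; the point is only that ordinary arrays, parsed by hand, are all a training loop actually needs.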

[Feb. 18, 2018] - Updated the Xerion NN-AI for dataset from June 27, 2017 to Feb. 14, 2018.  Accuracy co-efficient is 23.4 percent - random results with a couple of degrees of freedom, I am assuming.  (Two purely random variables, each with a 50% chance, so we get a consistent percentage of correct predictions around 25 percent.  Not much use, I fear.)  See third screenshot from top.
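The chance-baseline arithmetic can be checked with a quick simulation: predicting a 2-bit outcome at random against a random 2-bit actual, each bit matches with probability 0.5, so the whole vector matches with probability 0.25 - the ~25% floor mentioned above.

```python
import random

random.seed(1)
trials = 200_000
hits = sum(
    (random.random() < 0.5, random.random() < 0.5) ==   # random "prediction"
    (random.random() < 0.5, random.random() < 0.5)      # random "actual"
    for _ in range(trials)
)
rate = hits / trials   # hovers near 0.25 - pure chance, no edge
```

Any accuracy persistently above that floor on out-of-sample data is what would actually count as signal.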

[Feb. 05, 2018 ] - Ugly mkt action provides opportunity for new positioning.  Been focused on mkt action.  Curious how all the news services completely miss the trigger events, and interview analysts who prattle about US Fed. Reserve and interest rate deltas and 10 Yr US Treasuries at 2.85%.  Ho hum.  No new news there.  Fully discounted, of course.  It is the Bitcoin/NEM meltdown, and the $500 million theft from the Coincheck exchange in Japan which is driving this.  (Bitcoin Investment Trust thing in Cda did a 91 for 1 split recently.  Perfect indicator of a market floating on foam and fluff-puff.)  Bitcoin and other cryptocurrencies are a great idea for a transactional tool - but as an investment, it is just another mechanism to prevent compound growth from operating long-term.  Important to keep the world poor and hungry or no one will go to work, will they?   All investment *schemes* must vaporize wealth, or within 300 years every family would have billions of dollars, wouldn't they?   Studying CNNs (convolutional neural networks), via the Stanford course CS231n.

[Jan. 30, 2018]  - resolved the floating-point numeric-divergence issue (between Linux box and MacBook Pro) by recoding the program to use entirely 64-bit floating point variables instead of the 32-bit floats that it was originally using.  Confirmed Macbook Pro running Xcode 7.2.1 / Clang 700.x under MacOS Yosemite 10.10.5 has problem doing floating-point calculations on 32-bit float variables - at least in TensorFlow and Python with Numpy (Numeric Python library).   Once the program was recoded to use dtype=float64 instead of dtype=float32, the simulation ran as expected, up to 57500 iterations, showing evidence of image with complex, chaotic moire-patterns, as per the image at right.  Full details in the bug I filed on the github TensorFlow bug-tracking forum, here:
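The float32 failure mode is easy to demonstrate in isolation - this has nothing to do with TensorFlow as such, it is just how IEEE single precision behaves:

```python
import numpy as np

# float32 carries roughly 7 decimal digits, so a small increment can be
# absorbed entirely by a larger accumulator and lost forever.
a32 = np.float32(1.0)
assert a32 + np.float32(1e-8) == a32      # increment vanishes in float32

# float64 carries roughly 16 digits and keeps the increment.
a64 = np.float64(1.0)
assert a64 + 1e-8 != a64
```

Over tens of thousands of simulation steps, increments absorbed this way compound into visibly different images, which is consistent with the float64 recode fixing the divergence.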

[Jan. 12-24, 2018] - This "numeric divergence" issue is serious.  I keep getting different numbers on the MacBook (MacOS 10.10.5) (other reports same divergence on MacOS 10.12.x) and the Linux boxes (CentOS-7.4, with current Linux kernel).  Going to first principles, downloaded & converted floating-point test suite: UCBTest, from W. Kahan & Sun Microsystems, circa mid-1990's. (UCB=Univ. Calif. Berkeley).  I now have gFortran and C versions of Kahan's PIRATS program producing proven different results on MacBook and Linux 64-bit platforms.  Research continuing - converted entire test suite, and checking it on each machine.  The UCB tests using the gFortran PIRATS program (which shows different results on each platform) factor out TensorFlow, and the C language version of the same program factors out gFortran as a possible cause of the calculational divergence I am seeing.  Actually, TensorFlow could still have an issue with how its convolutional neural network training process works - some users have reported differences between how TensorFlow 1.4 and 1.2 work, with 1.2 running a test example correctly, and 1.4 not doing so.  But my focus is this *difference* between floating-point calculations on the MacBook versus the Linux platforms (CentOS 7.4 and Ubuntu 17, apparently) - all running fully 64-bit compliant O/S's.  The UCBTest suite compiles and runs all from Makefiles, & I've got it converted and running on both platforms, and it is showing different results on each, in more places than on just the PIRATS program.  I published the PIRATS source for both gFortran and C to the TensorFlow Github bug-tracking site.   Details on this issue are provided there.  URL is:

[Jan. 11, 2018] - still running tests.  A helpful fellow on TensorFlow github issues site has tried my program on Ubuntu 17 and a Mac running Sierra (MacOS 10.12.6), and the sim evolves to the moire-pattern image (generating tensor has large positive and negative floating point numbers).  It is only on the Macbook - both on binary installed versions of TensorFlow 1.4.1 with binary installed Python (running unicode=ucs2), and Bazel-built TensorFlow 1.4.1, with locally built-from-source Python (2.7.14 in both cases), show same behaviour - ie. sim evolves to large positive numbers only, a white-image.  No idea why, architecture almost the same (all Intel 64-bit multi-core), Python the same, TensorFlow the same, all Python packages the same, everything appears working ok.  But very different results with MacBook 10.10.5 Yosemite.   MacO/S bug?  TensorFlow bug?   Still unsure at this point.

[Jan. 09, 2018] - Trying to track down why I am getting *very* different behaviour of TensorFlow on Linux (CentOS-7.4) and Macbook (MacOSX Yosemite).  Built TensorFlow from source on MacOS and also installed binary.  Posted this to TensorFlow's issue tracker, and opened question on StackOverflow.  Mystery - but looks related to possible 32 bit overflow happening on Macbook side.  (Similar bug was in a library called "bottleneck" that used to be part of the pandas data tool package for Python.)   A fellow with Ubuntu 17.10, which I think is a supported TensorFlow platform, provided his results, which match what I am seeing on my Linux box.  Something is amiss, looks to be a bug in TensorFlow's math.  Or maybe not.  Results should be similar on each machine.  Issue and Question URLs below:


[Jan. 06, 2018] - Solved it!  After successful build of Tensorflow 1.4.1 on Macbook, I found it was impossible to "pip install ... " the created wheel file into Python 2.7.14 on the Mac.  Tried a bunch of ideas, and tried both my custom-built and original Python 2.7.14 (you can flip between the Python custom-built and the one installed in "...Framework" space, by just putting Python locations (eg: /usr/local/bin/ ...) at the beginning of the path, by editing the ".bash_profile" for your standard login ID on the MacOS, in /users/<your-user-id>.)  Both Pythons failed to install the successfully built .whl file - each reports "... is not a supported wheel on this platform.".   Solution was crazy simple - just rename or copy the newly-built wheel file from: "tensorflow-1.4.1-cp27-cp27m-macosx_10_4_x86_64.whl"   to  "tensorflow-1.4.1-py2-none-any.whl", and pip installed it into my locally built version of Python 2.7.14 just fine, no problem, even removing the previous (binary installed) version of Tensorflow 1.4.0.  Confirmed with "" (lists all installed Python modules), and then ran several test programs to confirm that newly-built-from-source TensorFlow-1.4.1 operates exactly same as binary-installed TensorFlow-1.4.0.  Fine result.  See closed issue on github for what is basically now: "How to build Tensorflow on a MacBook From Source" details:
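The rename trick, reconstructed here with a dummy file (the /tmp path and the dummy `touch` are just for illustration - the real wheel lives wherever bazel dropped it).  pip rejects the platform-tagged filename on this box, but accepts the generic "py2-none-any" tag; the archive contents are untouched, only the name changes:

```shell
cd /tmp
# stand-in for the bazel-built wheel (dummy file for illustration only)
touch tensorflow-1.4.1-cp27-cp27m-macosx_10_4_x86_64.whl
# copy under the generic compatibility tag that pip will accept
cp tensorflow-1.4.1-cp27-cp27m-macosx_10_4_x86_64.whl \
   tensorflow-1.4.1-py2-none-any.whl
# pip install tensorflow-1.4.1-py2-none-any.whl   # (the real step, not run here)
ls -l tensorflow-1.4.1-py2-none-any.whl
```

Note the caveat: renaming a wheel only sidesteps pip's compatibility-tag check, it does not make the binary inside any more compatible - it worked here because the wheel genuinely was built for this machine.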

[Jan. 04, 2018] - Just completed successful build of TensorFlow 1.4.1 on the Macbook!  I am surprised - did not think I would be able to get it all fully built on my Mac, which runs Yosemite 10.10.5.   What I did:

  • Started with MacBook Intel-based Core-i5 (4 processor 64-bit machine), running Apple's MacOS Yosemite 10.10.5 (last before they put the SIP stuff in to block root)
  • Had about 4 different Xcode versions - tried to use current (Xcode 6.3x, which I had needed to put my DOSbox stuff on the iPad-1), but had to upgrade to Xcode-7.2.1, the last one that can be run on Yosemite. Had to explicitly create "/Applications/Xcode7.2.1" and copy the Xcode7.2.1.dmg file downloaded from Apple to that subdir, where it becomes "".   You click on that to further install some debug stuff and other things it needs.  You don't need a "Developer" account - used my Apple-ID I had to make when I got the iPad-1.  Note: Once you get it, start an Xterm window, become root, and enter: "xcode-select -s /Applications/Xcode7.2.1/"   This makes it your "current" Xcode. Check with "clang --version".
  • Downloaded - from the Oracle site - the JDK-8 (Java Development Kit) for MacOS and installed it as per their instructions.  Bazel needs this.
  • Created dir "/home/Bazel" and download Bazel-0.9.0 (latest as of Jan 3, 2018), checked it with "shasum -a 256" and compared it with the site's sha256 version.  Then, unzipped it by clicking on it.  From within an Xterm shell, as root, you run: "./" and it should build it.  Did it last nite as an experiment, and didn't expect it to fully compile and install, but it did.  Whoo-hoo.
  • I had already built and installed to /usr/local/bin, a custom-built version of Python 2.7.14 (latest from the 2.7 stream, as of Nov. 2017), and I had installed all the needed Tensorflow packages.  I already had a binary version of TensorFlow running on the Macbook.  The Tensorflow build from source needs Python to have six, numpy and wheel.
  • I had tried and failed to build, with Xcode-6.x, so in the /home/TensorFlow/tensorflow-1.4.1 directory, I did a "bazel clean --expunge", to remove and reset things.  Xcode has to be 7.2.1 for this build to work, I think...
  • the bazel build command, entered at the Xterm window command shell, (as root), was:  "bazel build --config=opt --incompatible_load_argument_is_label=false //tensorflow/tools/pip_package:build_pip_package"
  • this is as far as I am now.  Next step is to try to build the ".whl" or wheel file, which is then used as argument to "pip install ..." so you can load your locally built copy of TensorFlow into your current Python.

This is a big result.  I did not expect to be able to actually do this, without more upgrades.

   Results of the TensorFlow 1.4.1 Build on the MacBook (Yosemite, MacOS 10.10.5):

       Elapsed Time: 3985.215 s
       Critical Path: 132.83 s
       Build completed successfully, 4044 total actions

Here is a link to the Bazel site notes which tell how I built Bazel from source:

First attempts to build TensorFlow were very problematic.  But I found this note below by Google-searching all the earlier build errors I was getting from using Xcode-6.3.x.  A useful note is this one below, because it tells how to use your GPU to speed things up (this is the CUDA stuff, I think?).  This note describes how to build it on a Macbook, and use the CUDA and GPU options...  It was helpful in showing what I had to do to get the Xcode7.2.1 to properly install on the Macbook.  (First attempts simply did nothing.  You have to create by hand, the Xcode7.2.1 subdir in /Applications, and put the .dmg thingy there with the Apple visual copy. Otherwise the O/S won't prompt for authentication, it looks like.  It just does the copy and then you see *nothing* at all..!).  The note below has details:

Also, here is a link to the Apple stuff, where you can download Xcode-7.2.1.  You login using your AppleID (don't need Developer Account, which costs money), and if you get the sparse screen with nothing on it, go to the bottom, and click on "More stuff..." or whatever it is called.  You then get a proper organized table of real software.  Find and download the Apple Xcode-7.2.1.dmg file, to your "Downloads" directory on your Macbook:

Hope this helps.

[Jan. 03, 2018] - Like that old Steve Miller song - "Time keeps on slippin' - slippin' into the future...".  All night (almost) trying to hack TensorFlow1.4+Python2.7 onto the Apple Macbook so its results (from the image stuff) match the CentOS7.4 Linux-box results.  (The Linux-box stuff is now really working fine. The grief, as usual, is coming from the Apple-side of things...)  The bizarro initial quasi-fractal image at top of this site, is generated with a modified version of the Laplace PDE example the TensorFlow guys provided in their tutorial.  Turns out to be a good test-thing, since it highlights this problem I am having.  TensorFlow binary installed on the Linux box uses 4-byte unicode character representation internally, but the MacBook version uses 2-byte unicode character representation.

For an image-generation exercise, it means I can build these really interesting, complex images - but *only* in the Linux version, where Python is compiled with "--enable-unicode=ucs4".  This is a big deal.  Why?  Because I am getting *vastly* different results between the Python+TensorFlow I have on Linux, and the Python+TensorFlow I have on the Macbook. 

I first just installed MacOSX Python 2.7.14 from the latest .PKG file from Python - fine stuff, no problem - except it is hard-coded as "--enable-unicode=ucs2" I think.  So, I took my Python tarball for 2.7.14, and - since I already had all the build stuff (gcc and xcode) on the Macbook - built a local version of Python with unicode=ucs4 enabled.  This all worked - except now, the TensorFlow binary for the MacOSX would *not* import successfully into Python 2.7.14.  It could be installed ok, but fully kacked on failures to find yatta-yatta-ucs2 stuff in the dynalibs.  Ugh.  And of course, when I backed out the change, and re-built Python 2.7.14 with the "--enable-unicode=ucs2", nothing at all would work, since all the "pip install <yattayatta>" Python packages (numpy, matplotlib, Pillow - and so on) blew up because they were all looking for <varname>ucs4 type stuff.  (Thank the Gods of Programming that the developers are using explanatory var-names.  It made it crystal clear wtf was kacking, at least).   I had to "pip uninstall ..." most of the packages, and then re-install them.  Once that was done, TensorFlow could be successfully imported and run on the Macbook.  I spent *hours* on this - and basically aligned *exactly* all the version-numbers of *all* the packages on the Linux-box Python and the MacBook Python.
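A quick way to check which unicode build a given interpreter is, before chasing package mismatches: on a Python 2 narrow (ucs2) build, sys.maxunicode is 65535; on a wide (ucs4) build it is 1114111.  (Python 3.3+ dropped the distinction entirely and always reports the wide value.)

```python
import sys

# 0xFFFF  (65535)   -> narrow / ucs2 build
# 0x10FFFF (1114111) -> wide / ucs4 build (and all Python 3.3+)
width = 'ucs2 (narrow)' if sys.maxunicode == 0xFFFF else 'ucs4 (wide)'
print(sys.maxunicode, width)
```

Running this in each Python before installing binary packages would have flagged the ucs2/ucs4 mismatch immediately.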

But here is the rub:  Because the TensorFlow binary is built for unicode=ucs4 in Linux-Land, and unicode=ucs2 in the World-of-MacOSX, running a TF program on one platform can produce ***wildly*** different results between platforms - even if you are running ***exactly*** the same code, and both machines are 64-bit!  For me at least, and the goofy stuff I want to do (I want to *create* images first, and only later, will I try to classify them), this is a really big deal.  Duh - ya can't have two completely different results being generated on two software-and-hardware identical platforms!  (Actually, you can, if one platform is using 4-byte character encoding, and the other platform is using 2-byte encoding, FFS!)   At least I think this is the issue.  I can't even test this idea, until I can obtain or build a "unicode=ucs4" version of TensorFlow for the Macbook.

So, it means I will have to build from source, the TensorFlow thing.  <sigh...>.  I am not really a very clever guy, and thought I could just hand-wave my way thru this (ha ha - like those proofs in advanced math classes - I would be confused, - and even the Prof. would get to the "hand-waving" point, where he could not explain how he got from here to there... Ya just wave your hands around and try to intimidate the dumb-arsed students, no?  We called this the "Hand Waving Proof" approach...).   But it looks like I am going to have to try to build TensorFlow from source - just to get my Linux version and MacBook version actually producing the same results! 

The example is the image at the very top.  It is built on the Linux box.  If I try to build that image on the Macbook - using *exactly* the same code, same packages, same versions of *everything*, I just generate a blank screen!  The generated image on the Linux box (Python, packages, & TensorFlow: unicode=ucs4) is a complex moire-pattern thing, with beautiful fractal nature, and the .jpg file is about 200K.  On the Macbook, (same Python, packages & TensorFlow: unicode=ucs2), I get a completely blank white image of nothing, and a generated .jpg file around 6K...(!?!).  There must be an implied 4-byte to 2-byte conversion going on that is just trashing the fine precision of the numerically-crafted image.

Like I said, this is a pretty damn big deal, and it is a complete "show stopper" for me, with the only solution to build consistent versions of TensorFlow that match each other.  I just won't have any confidence in the results, unless I can get basic number-crunching to work the same on both platforms.  I am guessing I have to run everything as unicode=ucs4.  I searched for a MacOSX version of TF that is built with the "unicode=ucs4" option, but could not find such a thing.  If I can actually make such a thing, I will put it up on my github page (which only has the Sharp APL interpreter, at the moment, as I physically have the license that allowed (& encouraged) re-distribution of that code.).  As long as you build and install your own Python (with the --enable-unicode=ucs4 parameter to the ./configure thing), then I notice that all the "pip install..." Python packages seem to be smart enough to toggle themselves to ucs4 from the default ucs2  (as they blow up, if you try to use them after rebuilding Python back to ucs2 from ucs4...).

So, I am reading about Bazel... the build environment one needs to assemble TensorFlow from source...  (and oh sh*t, I have to involve the JDK 8 giant dog pile?  <ugh> )... I  really don't like the modern trend toward hyper-bloatware.  Python is brilliant (sort of..), and TensorFlow is a cool idea - but looking at what I have to do to get TensorFlow to compile, I am getting that same ill feeling that I had after I ran my car into the ditch up at Healey Lake when I was a teenager...

Note: Some info on an Android Camera Demo, that uses TensorFlow or a light-version variant that can run on Android - maybe.  Initial examination tells me I have, sadly *zero* chance of getting any of this working for now.  The use of "gradle" and "bazel" and the bloated complexity they introduce means you need hard-core, domain-specific full-time teams of folks just to do anything now, sadly.  I suspect there is money to be made in creating development tools for real-world AI apps that aren't bloated nightmares of terabytes of cross-dependent, ever-changing, fragile, kidcode.   The new sdk's and dev environments are really awful - so much bloat, for so little gain.  There has to be a better way...  But here is the link.  The demo image is very impressive...

The Android stuff is interesting, but beyond what I can do right now.  I am anxious to try to have some basic image recognition that actually works - but my recognition will be done against synthetic images - sort of like the first one that is shown at page top.  The TensorFlow team have some example code.  Check this page, and scroll down to see the picture there...


[Dec. 31, 2017] - Note: See the "Code" menu-option to see the TensorFlow 1.4 + CentOS-7.4 Linux version of the little Laplace simulation program that creates the "exploding stars" image at right.  The image at right here was created on Macbook, using Tensorflow 0.10.  The minor mods to make it work on TensorFlow 1.4 and Linux are provided in the first example in the "Code" section.

Also: Updated the scribbled-notes to reflect the Python 2.7 build configuration needed ("--enable-unicode=ucs4") and the TensorFlow 1.4.0 install url for use with pip.  Just to confirm, ran the small linear-regression example, and confirmed the Macbook TensorFlow 0.10 version and TensorFlow 1.4.0 version on the CentOS74 box gave same answer. (My regression test is a test regression... and Able was I ere I saw Elba...).   

I am finally back to worrying about data, instead of process. If I can get the Kepler example, then that, with the MNIST stuff, should at least let me build an image-classification engine.  Then, I can write some code to construct "market images".  (I can bit-fiddle a .jpg or .bmp file now, with Python.  The complex compression sh*te used in .jpg images probably will force me to use bitmaps - but I can convert between bitmaps and jpegs now, again with Python stuff.  And I have Krita and GIMP and other stuff to directly fiddle the images, if I need to make some test-cases or limit-condition type of pictures.)   If I do all this AI-Ver.4 stuff, and feed the AI all the market info, and end up with a picture of a cat's face, I will be less than happy.
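Bit-fiddling a .bmp really is just headers plus rows of BGR bytes.  Here is a minimal stdlib-only writer of the kind I would sketch it with (the filename and pixel values are made up): a 14-byte file header, a 40-byte BITMAPINFOHEADER, then bottom-up rows of BGR pixels padded to 4-byte boundaries.

```python
import struct

def write_bmp(path, pixels):
    """Write a minimal 24-bit uncompressed BMP. pixels = rows of (r,g,b)."""
    h, w = len(pixels), len(pixels[0])
    pad = (4 - (w * 3) % 4) % 4              # each row padded to 4 bytes
    image_size = (w * 3 + pad) * h
    file_size = 14 + 40 + image_size
    with open(path, 'wb') as f:
        # BITMAPFILEHEADER: magic, file size, two reserved words, data offset
        f.write(struct.pack('<2sIHHI', b'BM', file_size, 0, 0, 54))
        # BITMAPINFOHEADER: 24 bits/pixel, no compression, 2835 px/m = 72 DPI
        f.write(struct.pack('<IiiHHIIiiII',
                            40, w, h, 1, 24, 0, image_size, 2835, 2835, 0, 0))
        for row in reversed(pixels):          # BMP stores rows bottom-up
            for r, g, b in row:
                f.write(struct.pack('<3B', b, g, r))   # stored as BGR
            f.write(b'\x00' * pad)
    return file_size

size = write_bmp('/tmp/market_test.bmp',
                 [[(255, 0, 0), (0, 255, 0)],
                  [(0, 0, 255), (255, 255, 255)]])
```

From there, Pillow (already installed for the TensorFlow work) can round-trip the bitmap to .jpg and back, so synthetic "market images" can be generated entirely in code.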

More thoughts on where my edge is coming from:   I think it might be history.  I've had a good year.  Why?  I've been a history geek (basically, a closet historical-scientist, really.).  I once visited the History Department of a modern University, and inquired about enrolling or applying - and all I will say, it was *not* Waterloo.  (University of Waterloo is the best University in Canada - but I'm not sure if they even do formal "History" now.) 

The experience of just chatting with a few 'History Department" types at this big school, really (I mean *really* !) put me off the idea of wasting any time in the academic cloister as far as formal History study was concerned.  The University History departments are laughingstocks.  They are full of these people who are homosexuals, lefties, and outright Marxists.   (I want to be like Seinfeld, and add "Not that there is anything wrong with that!" - except it is not quite true in this case.   These people with their gay-lefty-Marxist agendas are doing real and lasting damage to the formal study of History.  It is really sad.).  History didn't end, but the study of it maybe has.

And if we don't pay attention to History, then we are all like Bill Murray in "Groundhog Day" - we will all just keep repeating the same mistakes over and over, until Darwinian Evolution deals with the problem of species failure.

The only way to study History is just to read everything that real folks wrote - from Herodotus and Pliny the Elder, to guys like Sydney Homer and his "History of Interest Rates", probably the best book of hard-core historical research I have ever read.    Great discoverers like Leibniz and Michael Faraday both read entire libraries of books. (Leibniz was secretly given a key to the locked King's library, and Faraday had to leave school, and apprentice in a book shop - where he read every book, apparently).   If you want to be a successful investor, you have to pretty much do this too.  And don't just read business books.  Read everything.

Oh, and I have an example:  George Soros fancies himself a Historian and Philosopher, and guess what?  He *really* is.  He seriously is a serious, (and even sometimes wrong, but not very often), scientifically focused analyst of history.  His work on what he calls "Reflexivity" is very good.  Don't just read it because it can make you rich (it can), read it because it is clever, insightful, and wise.  It is a well thought-out and tested idea - which is *vastly* more than most formal "historical" work is.  And here is the thing about Soros.  He will likely die very rich, and that is the metric of success, no?  Most famous, widely-successful investors who make really big money, eventually blow themselves up and are carried out, after losing all their money.  Jesse Livermore, Niederhoffer, the entire staff of the company with the brightest folks in the world at the time (PhDs and Nobel-prize winners), ie. John Meriwether and his Long Term Capital Management guys, (they kept adding to a very big bond position that was going the wrong way... like idiots, until they were out of money),  almost everyone who got rich in the stock market of the 1920's, the tochi-korogashi (Japanese for "land-flipping" - rolling levered deals over and over until you have big money) folks in the 1980's who got rich - and then destroyed - on Japanese property, the Hunt Brothers who tried to "corner" the silver market (and were undone by a simple rule-change on the Chicago Merc.), Nick Leeson (the famous "rogue trader" who took down Barings Bank), etc., etc., etc.  If I sit here and think about it, I can probably name 40 or 50 detailed accounts I have read of super-successful guys who all blew up.   Soros is the one super-successful investor who did not blow up.  It is because he figured something out, and then implemented it right.  He was and is, a scientific historian.

See, no-one except religious lunatics, who believe in idiotic superstitious absurdities like "gods" and "demons" and "angels", and gay people, who want to promote their political/sexual views, care anything about gay stuff or other homosexual people's sexuality.   If Turing was gay, that says nothing about his genius.  It is just sad that social managers and other moralistic governmentalists made his life painful.  History is about the progress he made - not the boys he f***ed.  No one really cares about that, as it does not matter.  Yes it matters to some, but it should not.  There is this apocryphal story one of my Economics professors told us, of economist Joan Robinson at Cambridge, in the late 1930s, or early 1940's, running down the hall and into an associate's office, saying "Oh damn. The Americans have just found out that Keynes is a bugger, and we're afraid they're going to repeal the Full Employment Act."   (They didn't... And Keynes got married, so maybe he wasn't even as gay as he is rumored to have been.   His gayness is not the key fact.  The key fact is that his original PhD work, and his degree, were in Statistics, not Economics.  I am pretty sure that this is a key part of why he was a successful investor.  He had a deep understanding of statistical probability, and also understood the gambler's instinct, which he explicitly commented upon in the General Theory and in other documents.  Go find it, if curious.)

And no-one except annoying politicized females who spout feminist rhetoric care anything about the modern women-in-history material.  It is all just political noise now - and the whole point of *study* of anything, is to figure out how to cut thru the noise, and find the tiny kernel of truth - if there is really one there.  Lots of times, there simply isn't anything there.  You peel away all the layers of crap, fraud, blather, hype, deception and nonsense - and you are left with *nothingness* - like going into the cafe to meet Mimi (whom you want to bed, and have arranged to meet there...), and when you get there, the cafe is just full of non-Mimi's, and all you experience is *nothingness*.

Sartre solved the problem of defining "nothingness" just like Renoir solved the problem of painting - they both led with their ...  well, look up what Renoir said.  He was asked once, "How do you paint such amazing pictures?"  and his reply was: "I paint with my prick."  Of course, he was being metaphorical.  But look at his paintings... his "Woman at Bath" type pictures are like looking thru a keyhole at your teen-age fantasy - like Leonard Cohen's and Bob Dylan's songs were all about.   Let's be honest.  History is all about frustrated, clever men, trying to get to something they cannot reach in just one lifetime.  Women are already there.  They don't need to do anything - they need only sit there and wait, and everything will come to them.  (Just kidding.  Of course, it is tough for girls.  This is my way of telling them they need to do more than just "lean in".  The modern world is awash in disinformation.  Women - just like men - need to vacate their comfort-zones, and work hard when they have the chance.  They don't really need to (like all men must), but they still probably should anyway, as they will become better people for doing it.  But if they don't want to, we must all understand, that they don't really need to.)

It is men - like the frogs and worms that always migrate after a warm summer rain - that have to set out on long journeys of discovery.  They have little choice, really.  History and civilization - both are made by men, and always will be.  Women have the luxury to be concerned with bigger and more important things.  They make people.  Men can invent things, and build nations, win wars, and create empires, and create great works of art and literature.  But their accomplishments are nothing, compared to the female-ability to fabricate new people.  This is what history teaches us.  We ignore this truth at our peril.

And let me be clear.  I dislike "feminism" because it is all about women being strong, right?  Well, that is great.  We all like strong women - get your dragon tattoos and kick all the Swedish hornets' nests you want to.  But what about others?  Is the strong-woman of feminism supposed to make men weak?  Of course not - men are supposed to be strong also.  Great.  And quite correct.  But take a hard look at the political movement you are thus creating (and which we have been enduring since the bra-burning marches of the 1970's).  What have you got?  Looks a lot to me like it's just a new fascism - since if everyone has to be strong, then the weak are typically just thrown under the bus - or worse, subjected to the paternalistic policies of toxic, dishonest governmentalists.  Policies which are often worse than the bus tires.  (I am very Chicago-School on this.  Paternalism is bad.).  History suggests that "feminism" is not a nice thing.  The idea that women are at the "bottom", or subject to "glass ceilings" is absolute utter nonsense.  And political models which suggest that strong, successful females need to be stronger and more successful because men are somehow holding them back (which is absolutely not true), simply risk creating an ugly, vicious and deeply dishonest toxic cult of the "strong".  We have seen this movie before, and it ends badly.  Women are already in control of the production of human beings - and this is the most important political and economic role any group could possibly have.  They already have more effective real power than men, and a much larger range of opportunity as well.  A philosophy which suggests they are "downtrodden" is simply rubbish, and needs to be recognized as such.  Everyone should be strong - and strength should be used wisely, and with good judgement.  Men and women equally, must learn and then act upon this simple truth.
And it annoys me when political movements are crafted to vector people away from this important reality.   Can I hang up the tinfoil hat, now?   :)

History also teaches us that luck is very important. You have to go out and actively seek and search.  But the role of randomness is much, much greater than many would like to admit. 

You probably won the trade, or got the girl, because of chance. 

Sure, you worked hard and were prepared.  That is a precondition.  But your success in a particular venture is often due to luck.  You won because you were lucky. Not clever, not smart, not the wonderful fellow you think you are - you just got lucky.  Smart guys even call it that.  How was your <date / trade / life>?  Did you get lucky?

Luck really matters.  It is the secret that history teaches us.  And to make luck work for you, one does this:  Design and carefully implement strategies that enhance and augment your chance of getting lucky.  Keep your costs of each "trial" low, and try to run lots and lots of trials.  Move forward on the ones that seem to be working well, and drop quickly the ones that are not.  Do not become attached to any particular trade, person or outcome.  At least not so attached that you cannot quickly and painlessly disengage.  Don't be the Ant, be the Grasshopper.  The Ant worked hard day after day, and built a huge nest.  The Grasshopper listened to music and practiced flying and dancing all day.  When the weather grew colder, the Grasshopper flew south.  The Ant dug deep into her nest, and her entire colony  was wiped out when the bulldozers came to build the new subdivision for the humans.  That is the true story history teaches us - not the false story about "hard work" we learned as children.

Working hard, and expending effort, offers absolutely no assurance of a successful outcome.  It might be a necessary precondition, but the key to life and success is to work SMART.  Your outcome will depend on the decisions made, and the path you took, much more than on how much effort you expended after the choices were made.  This is what Artificial or Machine Intelligence can give us, if it is done right.  It will augment and enhance our chance of being lucky.  It will help us find where the gold is buried, and where the even more important gemstones of truth are located.  Simple, but effective.  The Kepler Telescope research that used TensorFlow to assist astronomers in the discovery of exoplanets is a fine example of exactly what I am describing.  So is the use of AI classification to improve cancer detection and diagnosis.  No one was put out of work, and no one will be.  The work simply took the results already found and developed by the analytic specialists, and allowed their skill and expertise to be augmented and amplified.

AI should help us achieve beneficial outcomes using this approach of ability augmentation and amplification.  History shows us that this is what humans have *always* done.   In weaponry, it is clear - the club and spear amplify the fist and nails, the gun amplifies the spear and arrow, and the tank amplifies the mounted horseman.   Writing amplifies memory, and libraries augment information management. 

Computers and the internet augment and amplify the library and the card catalog.  And AI will amplify our ability to classify and analyze.  And if it is done right, it will not cost much to operate, and it should help us all get a bit more lucky.  Sounds like a good plan, eh?  Cool.

[Dec. 30, 2017] - daylight hours - 12:44pm EST:  Pulled a student-style all-nighter - not planned, but you just get pulled forward by results.  If you look at my scribble-notes, you can see the version of TensorFlow that I installed was Ver. 0.10 from last year.  Current version of TensorFlow is Version 1.4.  I got that installed and working on the hybrid CentOS-7.4 box last nite (whoo hoo..).  I ran into bugs in my Python 2.7 version, and here is where being a stubborn bugger pays off - I had compiled and linked my own version of Python (CentOS 7.4 provides 2.7.5, but I downloaded the tarball and built my own to have the most recent 2.7.14).  The backports of Python stuff are pretty damn well built.  Big (I mean BIG) kudos to the folks who have made Python what it is.  The Python stuff really *works*, and so, apparently, does TensorFlow 1.4 now.  Kudos to Google and their team also.  Bloody good work guys, I am gobsmacked-impressed.  This stuff really exceeds expectations.

Anyway, what I had to do was rebuild Python 2.7.14 with "ucs-4" instead of "ucs-2" (which is the default), because the TensorFlow "pip install..." binary is built (from bazel, I guess?) using the ucs-4 variant of the unicode stuff.  You rebuild Python and give the ./configure step the option "--enable-unicode=ucs4", and that will fix it all.  Except you *also* have to uninstall and then re-install all the bloody Python libraries... which is a lot of work.  But if you are a stubborn guy (and just do it), you can then "pip install <the big long ... .whl>" site-URL of TensorFlow 1.4, and it will install.  I had numerous places where I got bugs while testing - but they were *all* because of that ucs-2 instead of ucs-4.  I just had to keep uninstalling and re-installing the Python packages until all the bugs were found and purged.
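Before going through the whole rebuild, you can check which Unicode variant your current Python interpreter was built with.  This snippet is just a quick diagnostic (not part of the TensorFlow install itself); it inspects sys.maxunicode, which differs between wide and narrow builds:

```python
import sys

# A wide (ucs4) build reports the full Unicode range (0x10FFFF);
# a narrow (ucs2) build - the old Python 2 default - stops at 0xFFFF.
variant = "ucs4" if sys.maxunicode > 0xFFFF else "ucs2"
print(variant)
```

If this prints "ucs2", the TensorFlow wheel will fail to import, and the rebuild with "--enable-unicode=ucs4" described above is the fix.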

The original "Laplace" pond-ripple simulation (which I converted into a star-formation simulation) provides a good quick test for TensorFlow, as does the simple linear regression model.  TensorFlow has been "new and improved" so that you must now explicitly choose an active run-session (just using the default session does not work anymore), and so I had to convert the Laplace example to run under TensorFlow 1.4.  I don't know TensorFlow yet, but if Dr. Hinton had a hand in it, it is probably the way to go.  I was *very* impressed by the Kepler Telescope results that found new exoplanets, as it is a perfect example of how machine-learning will augment experts.

Mark Langdon's prediction: THERE WILL BE INSIGNIFICANT UNEMPLOYMENT CREATED BY MACHINE LEARNING & AI-BASED AUGMENTATION.  MORE JOBS WILL BE CREATED THAN WILL BE LOST.  I AM ABSOLUTELY CERTAIN OF THIS.  Sure, we have a lot fewer "chimney sweeps" now in London than we did in early Victorian times.  But we have a *lot* more electricians.  And more of almost every other job, too.  Progress creates opportunity, and very rarely destroys it.  History shows us this.  Learn this truth, people.

So, I converted the Laplace examples, and also ran the Linear Regression example - under both IPython-Jupyter, and under just plain, command-line based Python, and got it to work, same as I had it running on the Apple MacBook, back in March of this year.  Build your version of Python with "--enable-unicode=ucs4", and uninstall and re-install almost all the Python libraries, and you can load TensorFlow 1.4 into CentOS-7.4+Python-2.7.14, with this command:

   pip install

Then, use this little test program (which Google provides in the "Install" section of its TensorFlow site):

     # --- test TensorFlow install, with a "Hello World" example
     import tensorflow as tf
     hello = tf.constant('Hello, TensorFlow!')
     sess = tf.Session()
     print(sess.run(hello))
     # --- done.

You may get a warning about not using all the features available on your CPU, and you should see the "Hello, TensorFlow!" message.  You don't need to start the whole Jupyter web-server thing - just enter it from the Linux terminal command line:


Hope this is useful.  (Apologies if the font is f*cked up here on this note.  The editor for my website is not good, and I don't have time to debug why the font is messed up - it is way wrong in my Firefox versions, both on Windows and Linux... :D )

[Dec. 30, 2017] - Just made my deadline, as I wanted to get Fedora/CentOS Linux 64-bit + TensorFlow working on a new platform before 2017 was done.  Top screen-shot shows the final "pip install ..." of TensorFlow for the newly-built CentOS-7.4 platform, which is tweaked to run the latest 4.14.9 Linux kernel (but I suspect the stock 3.10 Linux kernel that CentOS 7.4 comes with would also work.  That 3.10 kernel does not support the sound card, which is the only reason I hacked it to use the most recent one...).  The standard "pip install tensorflow" fails with just a "Go away!" error message, but after a lot of reading, I found reference to the correct pip "wheel" file one can use to make it work right.  I will create better notes shortly, but the second image is just a copy of my scribbled install notes, which show the steps one can take to get the thing running.

[Dec. 29, 2017] - Just confirmed that the 4.14.9 Linux kernel I am running (downloaded from the ELRepo site via a yum update command) is the latest stable Linux kernel.  I had not planned to be this bleeding-edge, but it does seem to work.  Successfully compiling and linking MPlayer (with libdvdcss and libdvdread), and confirming it worked to play a DVD, with sound and in a controlled manner (ie. volume works, subtitles can be toggled off and on with the "v" cmd, space-bar pauses playback, etc.), is a comforting result.  If you can play DVD's successfully, then system operation is probably stable.  Just learned about the Linux kernel archive, at:    and another useful site is:   Learning curve, here.  I've tweaked "grub2" to default-boot the newer kernel.  Grub, and grub2, are programs that let you select what operating system to boot.  Their setup is more complex on 64-bit CentOS 7.4 Linux.  The Pulseaudio hack is pretty extreme also.  Unless I disable it by renaming the executable in /usr/bin, it gets spawned automatically, even if you use root and the process-kill command to kill it.  And run from /usr/bin, it will not find the on-board sound card, in either kernel.  But at least in the 4.14.9 kernel, it will work if locally started using the "--start" parameter.

I want to run thru and put all the Python+Jupyter/IPython stuff from L2-AI onto L2-CentOS74, and try to get TensorFlow to install.  Also need to find the Kepler stuff, if it is available yet.  Saw details of a Qcon.AI conference in Cali - San Francisco, actually - which looked interesting.  Might be worth the expensive price, just for the TensorFlow tutorial.  Also want to try PyTorch, and I need to update the price-database.

How to Get the 4.14 Linux kernel - using the ELrepo repository:  These are the  4 steps I used to access the "ELrepo" repository, and get the latest stable kernel.  (Did these as root. Should probably use "sudo" instead...)

1)     rpm --import

2)     rpm -Uvh

3)     yum --enablerepo=elrepo-kernel install kernel-ml

4)     yum --enablerepo=elrepo-kernel install kernel-ml-devel

And to update grub so it would default boot the new kernel, I selected the first (newly installed) O/S in the list (note the list of O/S's that "grub" shows is zero-origin, meaning the first one is numbered "0", not "1", like you might expect...)


       grub2-set-default 0   (or whatever number you want. 0 is first one in list...)

then, rebuild the original grub.cfg file in the boot directory...

      grub2-mkconfig -o /boot/grub2/grub.cfg

This changed the default to the new 4.14.9 Linux kernel.  The original CentOS 7.4 kernel is still there, and you can select it with the down-arrow key at boot time.  But it cannot recognize the Intel on-board sound processor.
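To see which zero-origin index belongs to which kernel before running "grub2-set-default", you can list the menu entries from grub.cfg.  This is a sketch run against a sample file so it can be tried anywhere; on the real box, point awk at /boot/grub2/grub.cfg instead (the entry titles below are illustrative, not copied from my machine):

```shell
# Build a sample grub.cfg (on a real system, skip this step and use
# /boot/grub2/grub.cfg directly).
cat > /tmp/grub.cfg.sample <<'EOF'
menuentry 'CentOS Linux (4.14.9) 7 (Core)' --class centos {
menuentry 'CentOS Linux (3.10.0-693) 7 (Core)' --class centos {
EOF

# Print each menu entry with its grub2 index - numbered from zero,
# which is the number grub2-set-default expects.
awk -F\' '/^menuentry / {print i++ ": " $2}' /tmp/grub.cfg.sample
```

In this sample the new 4.14.9 kernel is entry 0, which is why "grub2-set-default 0" selects it.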


[Dec. 28, 2017] - CentOS-7.4 build:  Ok, big result.  This thing did not work well at all out of the box, but  I have got most of it running ok.   Things that had to be done:

   1) Make the startup boot show activity, not just a blank graphic screen with a throbber.  Do this from root, with "plymouth-set-default-theme details", then run "dracut -f", and then reboot.  At this point, I am getting a full hang if I try to reboot.  I have to run "shutdown -h now" to halt the machine, and then restart from cold.

 2) Get the ethernet card working.  For some reason, despite "chkconfig" showing the network on at runlevel 3, the network was not coming on.  I had to edit "/etc/rc.d/rc.local" and put "ifup eno1" into the rc.local file.  The /etc/rc.d/rc.local file also needs to be given execute permission, with "chmod +x /etc/rc.d/rc.local".  Once that was done, the network is available by default.  (Note: I also set a static ip#, using ipv4.  Not sure where systemd puts the config scripts - I did it using the "Settings" option in the GNOME gui.)  Note: This should probably be a systemd unit - but for now, the rc.local file still works.

3) As mentioned, I had to disable /usr/bin/pulseaudio (renamed it to "pulseaudio_off"), and start pulseaudio locally, prior to GNOME3 start.  Created two files, "sound" and "vision".  "sound" contains "./pulseaudio --start" and is run as a non-root regular user, from the local copy of pulseaudio.  It also plays a couple of .wav files using "aplay <wavefilename>", just to provide confirmation that the sound works.  The "vision" file just contains "startx" (not the original "startx &").  This is a kludge - but it got sound working.

4) Got MPlayer 1.2.1 working with DVD's, which was a trick.  Had to completely rebuild MPlayer, because it was not using the "libdvdread" library file.  The key is to first build the three libraries (libdvdread, libdvdnav and libdvdcss), and for the first two, put their *.pc files into the directory /usr/share/pkgconfig.  Then, build MPlayer with "./configure --enable-gui", "make" and "make install".  To get MPlayer to load the lib that lets DVD's be read, I had to add "/usr/local/lib" as a line of text in the file "/etc/".  This lets the linkage step of MPlayer find the library, so that it can read DVDs.  The MPlayer command to play a DVD is:

   "mplayer -fs -alang en dvd://1 -dvd-device /run/media/<youruserid>/<yourdvdfilmname>/VIDEO_TS"

(The DVD will be automounted to the /run/media/<youruserid> directory.)

5) Confirmed all this works now.  I am editing this note here for the Gemesys Ltd website, using version: "Firefox 52.2.0 (64-bit)" on the CentOS-7.4 64-bit box.   It all seems to work.  It's 3:20 am, Thursday, December 28, 2017.   The CentOS-7.4 box (with the 4.14.9 kernel) is snappy quick, and now has sound and vision. 
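A side note on item 2: the rc.local "ifup eno1" hack could instead be expressed as a small systemd unit, roughly like the sketch below.  This is hypothetical - the unit name and the /usr/sbin/ifup path are my guesses, not something taken from the box described above:

```ini
# /etc/systemd/system/ifup-eno1.service   (hypothetical unit name)
[Unit]
Description=Bring up eno1 at boot
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ifup eno1
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

It would be enabled once with "systemctl enable ifup-eno1.service", after which rc.local is no longer needed for the interface.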

[Dec. 27, 2017] - CentOS-7.4 build:  Major milestone.  Kernel 3.10 is reported to not work with the default Intel on-board sound card I have.  Fiddled about with "alsamixer" to select the sound card it recognized.  Had to disable - and then re-enable - "pulseaudio" (I hate that thing) by renaming the executable in /usr/bin, so the "systemd" stuff would recognize the sound card.  Sound is critical, since without sound, videos and video data-links are kinda useless, since I don't speak sign-language (except to one of my dogs, who is deaf).

Went to the "ELRepo" site ("Enterprise Linux"), and tried to download the next kernel up from mine, which is reported to be 3.17, but it offered 4.14 (!!), and I decided to go for it, since this *is* a research platform.  After getting the gpg-keys and yum-installing the new kernel and the devel-kit (for its headers), it turns out the new kernel does not overwrite the old one, and "grub" reports two boot-able Linux kernels now.  Extensive studying followed, so that I could change the default grub-boot from the old 3.10 to the new 4.14.

This was a *lot* of work, and I will have to document it in a separate page... (notes are a mess)... but the new kernel *WORKS* with the sound card, and this is a big result.  [UPDATE:  AHAH!  GOT IT!  You just need to add your local <userid> to the /etc/group file, after the pulse-access group definition!]  [Later update, evening: not enough.  You also need to localize the startup of Pulseaudio, and remove it from /usr/bin.  Kludgy, but I finally got it to work in GNOME.]

As a preliminary test, I migrated the source for the new MPlayer that works with DVD's from the L2-AI box over to the new machine, L2-CentOS74 (now running the new 4.14 kernel), and rebuilt libdvdcss, libdvdnav and libdvdread, and then *all* of MPlayer 1.2.1.  Note: I used the "--enable-gui" option on MPlayer, and had to create /usr/local/share/skins/default (with a default skin directory, taken from "Blue"), in order for MPlayer to start.  Also, I put the *.pc files (../pkgconfig) that I had copied to /usr/lib/pkgconfig on L2-AI into /usr/share/pkgconfig, which is probably where locally-built *.pc files (for libdvdnav and libdvdread) should be put.  This lets the MPlayer build know that it should build with DVD-read ability.  Didn't test DVD's last nite, but I confirmed MPlayer plays *** WITH SOUND ***, and runs full screen.  The build was blisteringly fast, compared to L2-AI (an old 32-bit P4).  This is all a big result, as it looks like I will be able to have a working 64-bit Linux platform.  Documentation to follow.  The site where the fix for the "NO SOUND ON CENTOS with INTEL on-board sound card" was found is:

about halfway down, where "marshallruan" explains his kernel-update process, which is dated: 2014-11-24.   Details on the "ELRepo" site are here:

You need to make: "/usr/local/share/mplayer/skins/default" directory and put the contents of the "Blue" skin into this directory, for Mplayer 1.2.1 to run correctly as a gui - ie. as a clickable desktop icon.  It won't work otherwise, and will just bring up a box saying "Skin not found".  I confirmed you can get the "Blue" default skin (by "Xenomorph" ), from the Debian site.  Get that "...tar.bz2" version, put the file into your MPlayer build directory, and untar it with "tar -xvf mplayer-blue...." to build the "Blue" directory.  Put the contents of that directory into "/usr/local/share/mplayer/skins/default" so that Mplayer can be run from an icon.  This "skins" nonsense might be the most retarded thing I have ever seen in the entire history of software development.  FFS, guys.  Really.  I believe the latest "Mplayer" does not use this silly idea. But if you are using Mplayer-1.2.1, you need to do this, to make it work as a desktop, gui-application.   You can get the default "Blue" skin at:

and again, MPlayer will throw an error box saying "No Skin found" without these little .png files.  Oh, you will also need to put a .ttf font file somewhere that MPlayer can find it.

Update: [Dec. 27, pm] - got it!  Got the sound working.  Here is the bugnote (all the way back from Fedora-8), and the wrong-but-very-helpful stuff from the Pulseaudio site:  and    For now, it looks like all I needed to do was PUT YOUR LOCAL NON-ROOT USERID (your main working userid) as a MEMBER OF THE pulse-access GROUP.  DO THIS BY USING vi TO EDIT /etc/group, FIND THE string "pulse-access", and if your user-id is boofus, then just append the string "boofus" to the end of the line, making "boofus" a member of group "pulse-access", and sound should work, without the authentication error.  This problem appears when root can see the sound card, but your regular userid session cannot see the sound card (using "aplay -l").

[Update: Evening] I could get the damn sound working, but when I shut down and restarted, it would always be dead again.  Finally, by hacking the startup of the thing, and making both root and my <localuserid> members of groups "pulse-access" and "pulse-rt", I am able to start a sound-server and then the GNOME desktop, and get sound.  It is a kludge, but what I did was to disable the systemd-started version of Pulseaudio (in /usr/bin) by renaming it to "pulseaudio_off", and then putting a copy of the pulseaudio program in the local directory of my non-root userid.  So, in my own directory, I have the pulseaudio executable.  There, I start it *outside* of GNOME, with "./pulseaudio --start", which complains about not having a binary canonical version, but it force-starts it.  You check it with "aplay -l" and see that you have a sound card visible.  Use "aplay <somefilename>.wav" and confirm you have sound.  (You should hear it, on whatever you have plugged into the HP headphone jack.)

Then, start GNOME with "startx".  I always used "startx &", making a detached GNOME process, so I could Ctrl-Alt-F8 out and see what GNOME/X-Windows was doing and reporting.  But here, you can't do that, else GNOME cannot "see" pulseaudio or the sound card, and then reports no sound card (or a "dummy", non-operational sound card).  Either way, no sound.  But if you start with just "startx", it looks like you inherit the permissions from the locally force-started version of the thing (ie. you have started a local copy with the "--start" parm).  Before running GNOME, check with "ps ax | grep pulse" to confirm pulseaudio is running, and make sure you have added your local userid to the "pulse-access" group in the /etc/group file.  Now, I can start GNOME with "startx", and have the Intel onboard sound-card work.  (The box is an HP-7600 small-form-factor box, 4 GB memory, 250 GB disk.  Even though it is only an Intel Core-i3 (4 CPUs), running at 3.04 GHz, it is still very fast.)
And with sound, it can render videos correctly (using MPlayer).  Check sound using the "Setup" tools, and run the "Test" option to hear "Left Speaker", "Right Speaker".
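Reconstructed from the description above, the two helper files look roughly like this.  The .wav filename is a placeholder (the entry does not name the files played); run "sound" first as the regular user, then "vision":

```shell
# ~/sound -- force-start the locally copied pulseaudio, then confirm audio.
./pulseaudio --start
aplay test.wav          # placeholder filename; you should hear it play

# ~/vision -- start GNOME in the foreground ("startx", NOT "startx &"),
# so the session can see the pulseaudio instance started above.
startx
```

These are startup-script contents, not a portable recipe - the local pulseaudio copy and group memberships described above must already be in place.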

Note: One ugly problem.  Since moving to the Linux 4.14 kernel, I am now getting a complete "hang" if I try to reboot.  This is a problem reported by many users, in many different Fedora bugtrack reports, back in 2012 for many earlier Fedora versions - kernel 3.3x, if I remember correctly.  It apparently has resurfaced.  The older 3.10 kernel (which cannot see the sound card) reboots without problem.

[Dec. 26, 2017] - The "Nightmare During Christmas" - My Attempts to Get CentOS 7.4 to Actually Work...  It has been hilarious trying to get the CentOS-7.4 box to work as a useful Linux box.  It has "systemd" as the internal process initiation and control system.  And, ah, how can I put this... nothing seems to work right just yet...  It is comical.  GNOME has been wrecked so you can't easily create a program "launcher" icon, there are *NO* system-configuration utility gui-windows that let you configure system operation, configure a sound-server, or set up your network cards, and there is no facility to configure the SELinux boolean values.  First impressions: CentOS 7.x is a brutal change.  The GNOME-3 + "systemd" approach seems to have effectively destroyed the "root user" configuration features that have been evident (and very necessary) since the days of Fedora-9, and even before.  To put icons on the desktop, I had to start a command shell, become root, and copy *.desktop files from /usr/share/applications/xxxx.desktop over to /home/<userid>/desktop/xxxx.desktop, where xxxx is the application name.  This is just a bit silly.

[Dec. 21-23, 2017] - I put Linux and Python/IPython, Tcl/Tk, Scikit-Image and Pillow (for graphics), and Jupyter (with its webserver running locally under Linux), plus all the libdvdcss/libdvdread/libdvdnav libs with MPlayer 1.2.1, on a Toshiba 4340 laptop I had that was gathering dust.  The machine is a rugged laptop built on a Pentium III processor, so I did not expect it to run everything - but it does.  It will even play DVD's (once the proper code in the libdvdcss library is installed).  MPlayer is a nice collection of code, and as it is built on ffmpeg, which has access to all the codecs, it can render many different formats.

<See the "Mplayer: Play a DVD" section for details>

This needs to be said clearly:  Running Linux as your operating system lets you use your computer as a computer again. Otherwise, it remains owned by Microsoft or Apple, and operates only at their pleasure, which is just a bad idea for any person or business.

Having a solid MPlayer is good.  I can actually play all the .MOV files I took with my little Kodak digital camera several years back, as well as videos shot using my Blackberry tablet and my Huawei phone.  The plethora of video formats and related codecs, combined with the hostile American DMCA and the damage it has done, has held back video from being used effectively on the internet.  Copy-protecting and "streaming" video content is quite simply absurd.  The fact that I have to download, compile, link and install libdvdcss, libdvdread, libdvdnav and MPlayer (all excellent products, certainly), just to render a DVD that I have already purchased and paid for, on a computer that I own, which was sold with a DVD disk-reader installed, is a form of social dysfunction.  There is nothing more tragically silly than writing important books, building large libraries, and then locking them up, so no one but a very-privileged very-few can read what is in them.  European society did this during the horror-show that was the "Middle Ages", and it is sad to see the same greedy, selfish foolishness being repeated now in the digital age.  I read where Berkeley University just took 20,000 video lectures down, due to some legal dispute involving American "accessibility" laws.  (The law apparently was going to require them to make equivalent versions available for "disabled" people.)  So even without the copyright rules and the DMCA, there are now other ugly ways that useful content can be suppressed, using the foolishness of modern American law.  The future of the "open internet" is looking a bit grim, as the forces of cruelty, greed and ignorance continue to gain the upper hand.  Try to stand against this tide, if you can.  Be Canute, and reverse it.  Folks who spout about history forget that for those who waited a while on the shore with their King, the tide did actually recede, just as Canute had ordered.  He got wet, but the tide went out, didn't it?
Everyone misses the point of the Canute story.  Maybe nothing we do really matters, and we have no power at all over fate - but maybe it does, and we do, right?  Our job is to try.  The King had to explain to his people the importance of trying, the need to be patient, and how one can achieve a desired result - even against the most powerful forces of nature.  Canute was not a fool.  His Knights who were smart and patient probably learned the lesson he was trying to teach.  The foolish, superstitious ones who expected a "miracle", and were disappointed, would show themselves to be disloyal and useless - while the wise Knights who stood in the water with their King, knowing the tide would recede, would show their strength and loyalty.  And the King would then know to which group each man belonged.  What Canute did was brilliant - especially in an age of ignorance & superstition that was so dominated by the awful foolishness of religion.

I am anxious to replicate the Kepler code, and really want to try to get TensorFlow running on a Linux platform, rather than just the Apple MacBook.  This means setting up a 64-bit Linux machine, probably with CentOS 7.x, given the success of the CentOS 6.6 testbed (which runs a Ruby-on-Rails server, as well as the Python/IPython/Jupyter/Tcl/Tk research environment).  Experiments with "networkx" have been successful, and I am looking at installing and configuring PyTorch (the Python neural-network-based machine-learning framework).  PyTorch really appeals to me, but so does replicating the Kepler research material using TensorFlow, as a detailed review of their approach should provide a good learning example and operational prototype for what I want my market-intelligence AI to provide.

[Dec. 14, 2017] - Listened to the NASA news conference re. the Kepler Telescope discovery of additional planets around the Kepler-90 star, which made use of TensorFlow machine-learning to identify lower-intensity light-signal changes which gave evidence of the existence of new planets.  The use of a TensorFlow neural-network trained using data from human astronomers is very interesting, and is documented in this about-to-be-published paper:

Very significant discovery (the first planetary system found which has as many confirmed planets as our solar system), and important because of the successful application of the machine-learning technique employed.  The two researchers, Christopher Shallue and Andrew Vanderburg, used a training set of only 15,000 cases, where human astronomers made decisions on what they found, and their network demonstrated a 96% accuracy (they held back 10% of the training cases to use as a verification suite).  I want to see their code.
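As a back-of-the-envelope check on those numbers: a 10% holdout from 15,000 labelled cases leaves 13,500 for training and 1,500 for verification.  Here is a minimal sketch of that kind of split - illustrative only, with dummy labels; it is not the Shallue & Vanderburg code:

```python
import random

# 15,000 labelled cases, as described above; the (name, label) pairs are
# dummies, purely to illustrate the 90/10 train/verification split.
cases = [("lightcurve_%d" % i, i % 2) for i in range(15000)]

random.seed(0)          # deterministic shuffle, for the example
random.shuffle(cases)

n_holdout = len(cases) // 10          # hold back 10% for verification
holdout, training = cases[:n_holdout], cases[n_holdout:]

print(len(training), len(holdout))    # -> 13500 1500
```

The verification set is never shown to the network during training, so the 96% accuracy figure is measured on cases the model has not seen.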

Also: Last nite, I built a new version of MPlayer, a tool for screening video files, and included the necessary "libdvdcss", "libdvdread" and "libdvdnav" libraries (which I built from source), so that MPlayer could render DVD disks correctly.  I documented the steps I took on an older, 2.6.27 Linux kernel, and confirmed that my older 32-bit Pentium 4, running with only 2 GB memory at 2.40 GHz, could render a commercial DVD from start to end, flawlessly.  I have also built Jupyter-IPython Notebooks for this machine, but have yet to install TensorFlow successfully on it (as it must be assembled from source, if it is to work on a 32-bit machine).  What is interesting is how surprisingly well this older platform works.  I "git cloned" the latest "libdvdcss" code from github, downloaded .tar.bz2 tarballs for the "libdvdread" and "libdvdnav" code, and compiled and installed them from source.  This let me re-compile and install MPlayer, which then was able to render a commercial DVD successfully, at high resolution, and without any evident problem.  Better than my "Region-code=0" DVD player, actually.  Interesting result.  I documented the steps in the section on this webpage called: "Mplayer: Play a DVD"

[Dec. 13, 2017] - Took another look at getting TensorFlow to run on the Linux boxes - which are 32-bit.  Just compiling Bazel from source, and installing JDK/Java 8, looks like a dog's breakfast, so we will just drop some cash on yet another box.  I want to try TensorFlow, but it is a nightmare to build it from source, because of the Bazel stuff, according to anything I can find online about this process.  I really want it running on Linux.

It was an absolute pain messing around with different versions of Xcode just to get my DOSbox stuff to compile (and not blow up with an Apple-inserted time-bomb), and then deploy to the jailbroken iPad.  I don't like this vendor-restrictive nonsense.

I did finally get the TensorFlow "Laplace" tutorial of pond-ripples (my modified case: exploding stars... as shown, screen right) running on the Apple MacBook, and once I installed "networkx", I got the graph stuff to run with Tcl/Tk display.  I did not realize that Tcl/Tk seems to be bundled with Python, so I don't need to install it separately.  The histogram examples in Jupyter+IPython Notebook work also, and render the histogram to a separate Tcl/Tk canvas, which is nice.  I am running Python 2.7.10 on the MacBook at the moment.  The attraction is that the ssh/scp stuff works, and the full TensorFlow is installed (but it is the version from March 2017 - not the most current 1.4 version).  At least I can try the MNIST tutorial, I am hoping.

Microsoft can be annoying, but Microsoft is like a helpful public utility, compared to Apple.  I was able to install from source, all the stuff I needed for Python and IPython Notebooks+Jupyter, and build "scikit-image" from source, because Microsoft made version 9.0 of their Windows C++ compiler for Python available as a free download.

That was really good of them to do that.  The scikit-image stuff would not build until I found a special "stdint.h" include file and dropped it into the directory of compiler includes, but when I did that, everything could be built - even on my old Windows-XP 32-bit SP3 box - and I now have the same Python 2.7.14 environment (with scikit-image, matplotlib, numpy, scipy and even _tkinter + Tcl/Tk) running on Windows boxes that I have on my Linux boxes.  But even though I have Python 2.7 running on the iPad, I can't get anything to install there because of the Apple sandbox thing that fires the "kill -9" switch to prevent any compile from running to completion in a build chain.  I have gcc and Python, but it is not possible to "pip install" any of the needed libraries, even with the jailbroken iPad.

It's good to finally get the Jupyter-IPython Notebook stuff installed and working on the Macbook, and run the first tutorial for TensorFlow correctly.  I am also very interested in Google's offering of the TensorFlow "Lite" for operation on Android-style devices.  This is what I have been trying to do with the jailbroken iPad, running the APL stuff, which can do the Xerion matrix calcs, to execute the neural-net against immediate data.  The technology worked, but the forecasting of the AI was no good. But this TF-Lite stuff looks useful.

I read with interest the expected announcement re. the NASA Kepler telescope.  The Kepler array has discovered over 2000 confirmed exoplanets, and it looks like they used TensorFlow to hack thru the massive telemetry data they have, and think they have found something interesting, which will be announced tomorrow.  Some are suggesting they have found space-aliens on one of the worlds - ie. a signature of electromagnetic variations that suggests an advanced radio-transmitting society might be evident on one of the exoplanets identified.  Or more likely not.  The whole "we have found aliens" story is probably complete fabrication.  Probably NASA has just found evidence for some phenomenon that they will assert needs to be investigated further - always a safe bet for a scientific announcement.  Who can argue against that?  What was interesting to me was that they apparently hacked away at the data with Google AI products, which probably means TensorFlow.  It would be nice to get the f***ing thing to work on something that is sufficiently open that it allows some real work to be done - like some version of Linux.  Using Apple software is like trying to do research physics on the factory-floor of a closed, union-shop with employee work-rules.  Macbook and OSX is a dumbed-down, locked-down journeyman's platform for reporters and lefties. I am seriously thinking about wiping the Apple O/S off the thing, and installing CentOS 7.x on it.  If I knew for certain that the wifi would work, I would do it right now.  At the very least, I wish I could throw away their desktop software and replace it with Gnome or KDE or something that did not make me frustrated when I try to use it.

[Dec. 08, 2017] - Crawling up the curve..  Three runs at forecasting (first on a DECsystem 20/20, then Xerion+Slackware back in the '90's on a P/C, and most recently, Xerion+CentOS/Fedora on the homebrewed Linux-LAN) - all have demonstrated that short-range forecasting does not work.  So, where am I getting my edge?  Thinking about Gibson's "Neuromancer" (and reading the great Wikipedia summary), took me back to my days at SMI.  I had written this Black-Scholes option-calculating stuff, and transcoded a thing called EVS into their research efforts.  The EVS thing was a cool, cross-sectional polymorphic database of everything - all major public company financials in Canada - and we got the kids to maintain it & keep it current.  I tried to get Jim to read Neuromancer, but it was where his ego defeated him - he read a few pages, and said it was not well written.  He was wrong, of course.  It had won all the awards in 1984 - Hugo, Nebula and some others. (Like taking the Pulitzer & the Booker prize, for those of you who read emotion-novels).  Time magazine put it on the list of the top 100 books written in the English language since 1923.  Jim was my client for the EVS thing, and he, more than anyone outside of book-reading, was the one who taught me how to hack the stock market - a useful skill, which has allowed me to pay the bills.  I am grateful to him for his advice & suggestions, and the books he gave me to read.  I had given him "Soul of a New Machine", and he had just loved it.  I was sad he did not want to read Neuromancer, because I knew - in the strange way that I *always* just know some things - that it would cost him something not to read it.  He didn't see that picture of the future we are now embedded within.  He died in a hotel-room in Ulan Bator (now spelled UlaanBaatar in my Rand McNally Atlas), after attempting to negotiate settlement for his mining company that had had its uranium mine seized by local Government-types.
His company had won their case at the International Court and were trying to collect the 100 million USD they were awarded.  But he died, and I feel some guilt, because I should have insisted he read Neuromancer.  Gibson had painted this absolutely wild picture-of-the-world; a piece of fiction about a violent, complex, dangerous, absurd and supremely fascinating future where technologically-hyper-enabled folks are doing battle to achieve their private objectives.  It's a world not unlike where we are now.  I am a risk-taker.  But even I would not go to UlaanBaatar alone, and try to get 100 million USD from Russian-enabled Government gangsters who had just seized a high-value uranium mine, which offered them nuclear independence.  Geopolitically, Mongolia is the meat in the sandwich between an expanding, aggressive China, and a commercially successful and improving Russia, eager to maintain its status in the world.  His death was ruled "natural causes", and maybe it was.  But death is death.  I read everything he gave me, and it changed my direction.  Jim was the one who put me on to Hinton's lectures, where Dr. Hinton and his team were offering Xerion source to any citizen of Ontario who wanted a copy.  I built this impressive process to forecast currency futures.  It didn't work either.  But I learned a lot, including Slackware-Linux, which I knew would be the future of computing.  The internet was just beginning.  Everything was possible, again.

And now, I am back to the very first idea I had - reading and processing natural language files, creation of directed-graphs, adjacency-matrix calculations, etc. to attempt to build an AI that replicates what successful investors seem to be able to do.  Expert systems with fuzzy logic, inference-engines, and so on.  If you can't forecast, you must at least be able to create some flight-instruments that can help one fly IFR thru a cloud-scape that is often trying to kill you.  And I still don't know if I have been smart, or just lucky.  Jim would understand that.  We all want to be Nero Tulip, and score the mega-dollars.  But maybe just being Fat Tony, and eating and living well, is the best that we can take from the markets, and still enjoy our lives.  I know I can't beat Wintermute.  But wisdom comes when you realize you don't have to.  And this wisdom can let you live long and prosper, which of course implies that you also get to avoid premature death - an important executive outcome, no?  (Hey, kokimashta, ne!)

[Dec. 06, 2017] - Got the "scikit-image" library installed in Python on the CentOS Linux box and the Windows box.  Was not planning to do Windows/Python configuration, but I started as an experiment, and found it was possible to download "Microsoft Visual C++ 9.0 for Python 2.7" directly from Microsoft, as well as the associated runtime libraries for C.  This lets the "pip install scikit-image" command run to completion (it downloads and compiles a lot of material).  I will post a document on how I did all this.  You need to get a "stdint.h" header file, and copy it into the "include" subdir for MS-Visual C++ 9.0.

[Dec. 05, 2017] - Quite a project getting Jupyter+IPython set up and running nicely on the Linux boxes and laptops.  But it has been successful.  Really a bit of a learning curve to climb, as there were many utilities that needed to be installed, updated and configured. A modern Firefox browser is needed, because Jupyter (used to run IPython (interactive Python) Notebooks) creates a token-authenticated webserver which communicates with the browser to run.  The gain from this approach is that the browser gui-toolsets are used to paint images, ie. matplotlib can be used to generate impressive graphics from non-trivial datasets.  Example: on a dual-core ACER laptop running a Linux 2.6 kernel, I could generate 100 million pseudo-random variables, run some calcs against them, and plot the results into a stacked-histogram (to see a normal distribution), in roughly 15 seconds.  The Python numpy lib wraps the old, fast, well-written Fortran linear-algebra routines (BLAS/LAPACK), which are quick and correct.  One just starts a terminal shell in the Gnome/Xwindows desktop, and enters: "jupyter notebook" and **!shazzam!**, Firefox fires up with a Jupyter webserver running in background, and an interactive iPython session ready to go, and you are looking at a directory tree, in gui form, at localhost:8888.  You load up an iPython notebook (an *.ipynb file) from the browser window, as easily as loading an Excel spreadsheet, except you can work with literally billions of datapoints, if necessary.  This changes things.  The average research guy can use a computer as a computer again, instead of just running it as a clever typewriter with big document storage.
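A scaled-down sketch of that histogram experiment (the variable names and the particular calc are my own illustration, not the actual notebook cell): draw pseudo-random normals, run a quick calc against them, and bin them into a histogram.  In a Jupyter notebook, the same data would go straight to the screen via matplotlib's plt.hist.

```python
import numpy as np

# Scaled-down sketch of the notebook run described above; the laptop run
# used 100 million draws, but 1 million keeps this quick to re-run.
np.random.seed(42)
n = 1000000
x = np.random.randn(n)      # pseudo-random standard normals
y = x * x                   # a simple calc against the draws; E[x^2] = 1
counts, edges = np.histogram(x, bins=50, range=(-4.0, 4.0))
print("mean=%.3f  std=%.3f  mean(x^2)=%.3f"
      % (x.mean(), x.std(), y.mean()))
```

The printed mean and standard deviation should sit very close to 0 and 1, and the peak histogram bin lands near the centre - the normal distribution the notebook plot makes visible.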

[Nov. 27, 2017] - Learning Python, created some experimental images directly from code.  Other than using AI technology as a tool to assist and augment my trading and investment efforts, I have yet to see a viable role for an AI application that a small-holder can use to make an economic gain.  If the AI tech cannot offer me some form of provable "edge" in short and medium-term trading/investing activity, then I may be at an impasse, as other viable avenues for commercial deployment are not immediately evident.  Also, I had an insight:  As the internet becomes dominated by AI harvesters, we can expect the rise of human "ratf(_)kers" who will act to increase both the amount and the dispersion of false data and deliberate inaccuracy.  And this should be seen as an evolutionary adaptive response - not necessarily a bad thing. It's like animals evolving camouflage.  I put an article on LinkedIn about this...

[Nov. 20, 2017] - Spent a chunk of time building a lot of Linux utilities from source (curl, cmake, git, tcl/tk, MPlayer and such) - tried to get Krita artwork-image prgm working on Linux - had to give up and get the .dmg file for MacBook, and install it on that platform.  See the Krita section for details.

[Nov. 17, 2017] - Some milestones for new V2 work:  Got Python and libraries (especially the image, numeric and scientific libraries) working on the Linux laptop.  Had to download and build Python from source (using 2.7.14, latest from ).  Then had to get the openssl-devel stuff (and do a machine reboot! very important), and then could run "python", (after I found it at and downloaded it), which installed pip, setuptools and wheel.  Then, I could "pip install numpy", and scipy and Pillow. (Note: use Pillow, even if running the Python 2.7.x branch; don't use PIL). Got it all working - and was able to use the Linux machine to access the Huawei phone to migrate the photo of the two machines over.  Had been dependent on Windows to do this, but determined a trick to let the Linux box access the ftp app on the phone.  Also, using updated Firefox on Linux (required updating Gtk and a bunch of other stuff) to write this.  This is the first posting to this website that has not used *any* Microsoft Windows technology.  Getting the image off the phone, editing it (used GNU Gimp+gThumb image viewer) and posting here was a new process, but it seems to have worked.  Python + Linux is sort of magical - like that xkcd cartoon.  It's quite liberating.  I am really interested in how it was possible to build the Moire-pattern simulation .jpg image directly.

[Nov. 16, 2017] - Spent some time learning about video stuff on Linux (Mplayer, ffplay, ffmpeg, etc.) and built a home-brewed home-theatre system and a controller using the hacked iPad and a Linux laptop (an Acer Travelmate).  It is a cool hack, as it lets me manage and watch video playlists.  Have also returned to the Macbook, Tensorflow and Python.  Back to learning Python and the Tensorflow framework.  (This may take a while... ).  I have the idea to express the market picture as a, well - PICTURE.  I want to try to use image-processing AI technology to evaluate the image for any possible embedded informational edge.  Quite a different approach.  If you look at the "Code" section, you will see a little Python program that creates simulated Moire patterns. I show a pattern-image, and the Python code that created it.  I am running Python 2.7.10 on a Macbook, under Yosemite (OsX 10.10.5), an old, but stable MacOS version.  At least mine seems to be.  [Update] Later in the afternoon, after a sleep.. :) .. I put Python on the Windows box a while back, but never tried much.  Just ran the little "" image-generator program (see "Code" section), and it worked on the Windows box also - scipy and numpy were successfully found on the fly, and some Microsoft image-viewing utility for looking at faxes is used to render the image immediately.  Very Cool.  I just used scp to copy the prgm from the Mac to the Windows box, started Python and did an execfile.  Running Python 2.7.10 on the Mac and 2.7.12 on the Windows box.  Python is a well thought-out language and environment.  It lets you actually do things, instead of just drawing dreams on whiteboards.

[Nov. 5, 2017] - Couple of interesting things: Got the GAUSS routines (for matrix math, eg. multiply and inversion, etc.) running nicely on the iPad 1 now.  They are not super-fast, but the numbers are right.  I now have "container"-based intercompatibility among Windows, iPad-iOS, Linux and Android using the DOS-emulators.  I can run the *same* code on all these platforms, and do consistent math - so I can multiply and invert tensors and get NN output in (GO / NoGO / Don'tKnow) trinary logic, hopefully.  I have a couple of different flavours of Linux - Fedora9 and CentOS. The DOSemu stuff on CentOS would not work on "term" only (ie. without Xwindows). After research, learned I had to install the S-lang development lib and the GMP development stuff (ie. basically just header-files so the DOSemu compile would run to completion).  Got it to work.  Now, both CentOS and Fedora boxes can run DOSemu without running X, so I can log in remotely, pass data in with a simple "scp" command, and so run on my own private "cloud" if I need to.  Another checkmark in a long list of tickboxes...
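For anyone curious, the cross-platform sanity check those matrix routines perform can be sketched in numpy (my illustration, not the GAUSS code itself - and the trinary band value is an assumption, not a production parameter): multiply a matrix by its inverse, confirm you get the identity, then map a raw net output to trinary logic.

```python
import numpy as np

# Build a well-conditioned 4x4 test matrix, invert it, and measure how
# far A * A^-1 strays from the identity (should be near machine epsilon).
np.random.seed(7)
A = np.random.rand(4, 4) + 4.0 * np.eye(4)
A_inv = np.linalg.inv(A)
err = np.abs(A.dot(A_inv) - np.eye(4)).max()
print("max deviation from identity: %g" % err)

def trinary(y, band=0.1):
    """Map a raw network output to GO / NOGO / DONTKNOW (band is illustrative)."""
    if y > band:
        return "GO"
    if y < -band:
        return "NOGO"
    return "DONTKNOW"

print(trinary(0.8), trinary(-0.5), trinary(0.03))
```

Running the same check on each platform (Windows, iPad-DOSbox, Linux, Android) and comparing the printed deviation is a cheap way to confirm the math is consistent everywhere.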

The other track is the video stuff.  I compiled "ffmpeg" from source on the CentOS box, and can run videos there now, using ffplay instead of mplayer/gmplayer, which reports that my box is too slow.  (Comical. Using "ffplay", I can render any .flv or .mp4 video smoothly now on the CentOS box in full-screen, except they render a tad fast - there is minor "chipmunking" of the audio stream.  Anyone know why???)  Same videos, with the same Mplayer source, from the same tarball, running on Fedora9 Linux work flawlessly - good enough to operate as a "video jukebox" that I configured for fun this weekend, and patched thru from a Linux laptop to the big-screen TV.  The idea is that I could ffmpeg market and news data-info into summary images - process the math-rendering of the images in real-time using an AI-built parameter tensor (ie. transform them and then do the matrix math), and then have an indicator calculator that will give me a Go/NoGo/Don'tKnow output, which will be my edge.

It is cool just getting the ffplay stuff to work right.  The "ffplay" is a light-weight (well, sort of light-weight, but not really) video player, that runs I think using SDL (Simple DirectMedia Layer) stuff, into Xwindows, and is quite efficient, compared to "Mplayer", which grinds and is choppy and just too slow.  Mplayer just cannot render hi-res videos in real-time without delay, choppy rendering, and serious frame-dropping, whereas "ffplay" can give a smooth, full-screen video+audio output experience on the Linux boxes.  I think sound+vision is the key to the future in many product areas - even with market-calling AI's.

[Nov. 1st, 2017] - Got the DOSPAD-Gsys stuff working correctly, and put a Fortran 5.1 compiler testbed on the iPad 1.  I hacked this all together last year, and got it working on an iPad 2, but the Apple Xcode compiler has various restrictions and time-fail code built into it: the test app on the iPad 1 would not work at all, and the DOSPAD version on the iPad 2 failed to operate after a short time, as per the restrictive practices of Apple Corporation product design.  I was able to get a version called "DOSPAD-Gsys" - an implementation of the DOSbox code - running, by various methods, on the iPad 1.  After some effort, I have been able to get the original IBM/Microsoft Fortran 5.1 compiler running under this DOSPAD-Gsys app, which of course requires a jail-broken iPad.  It produces results in both single and double-precision which match results from the Windows "Cmd" shell and the Linux "DOSemu" emulator.  This means I can use Fortran matrix-math routines to evaluate tensors produced by any NN-AI, so data-evaluation can be carried out on the fly, based on real-time data harvested from the Internet using Lynx, which I also have running on the iPad 1.  Click on the "DOSbox - on iPad" menu-bar option to read details.

[Oct. 29, 2017] - The post-training evaluation of the Xerion neural-network based market forecasting tool demonstrates a co-efficient of accuracy of around 0.24, roughly 25 percent.

In thinking about these results - which seem to be consistent over time - I think I have a picture now of what is happening.  Basically, I have confirmed the current approach does not work, but why 25%?  If it was pure random, I expected to get a co-efficient of accuracy somewhere around 33%.  But my understanding of the probabilities here was wrong.  My working hypothesis is that what I am dealing with is two sequential random events.  First, the dataset is created and examined, and we train to that.  Then, the trained network is used to predict the outcome 5 days ahead.  (It is five, because the closing prices of the current day are not known.  On day-t, we are only using data up to yesterday, so it is five days ahead of yesterday, or 4 days ahead of today.)  What I believe is happening is simply this:  In selecting data from a group of series, and along a 5-day time vector, I am basically sampling randomness.  I am looking to see if I have an up or a down trending market.  If I am dealing with pure randomness, then I have a 50% chance of even making a correct decision here. (Eg. I may see lots of UPs, but the true-trend is actually DOWN).  It's basically a coin-toss whether I have grabbed the useful, predictive data or not.  Then, in using the data to try to predict a future event, if I am dealing with pure randomness, I have a 50% chance of making a correct up versus down prediction. (I have to have a +1 or -1 to trip the counter that calculates the co-efficient of accuracy).  So again, if we really are dealing with pure randomness, and I get a trend signal either up or down, I still have to interpret that signal, and make a choice of UP or DOWN, which, if the predictor is just flipping a coin to decide, implies a probability of 50/50.  So the 25% co-efficient of accuracy is just what would be expected, if the driving mechanism is one of sampling from a purely random phenomenon, ie. 0.5 x 0.5 = 0.25.

So, the 0.24 resulting co-efficient of accuracy that I am consistently getting seems to be a pretty good indicator suggesting that I am sampling data from a domain characterized by a purely random, unbiased process.  Basically, in schematic:  random data in => 50% chance of getting an up or down signal correctly => run signal thru the prediction network => outputs a signal of +1, 0 or -1.  But I am only counting as "Accurate" events where I get a correctly predicted UP or DOWN signal.  So again, if the process is purely random, I am right half the time = prob. of 50%.  So, the probability of getting a correct hit is: .5 times .5 = .25, which is very close to the experimental results I am seeing.
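The two-sequential-coin-toss argument is easy to check by simulation.  A quick sketch, with both stages modelled as pure 50/50 chance (the variable names are mine; this is the argument, not the actual Xerion evaluation code):

```python
import random

# Monte Carlo check of the 0.5 x 0.5 = 0.25 argument: a "hit" requires
# two independent 50/50 events to both go right -- first grabbing the
# predictive data, then calling the direction correctly.
random.seed(1)
trials = 100000
hits = 0
for _ in range(trials):
    sampled_right = random.random() < 0.5   # did we grab the useful, predictive data?
    called_right = random.random() < 0.5    # did the net call UP vs DOWN correctly?
    if sampled_right and called_right:
        hits += 1
accuracy = hits / float(trials)
print(accuracy)   # orbits around 0.25
```

Over 100,000 trials the hit rate settles very close to 0.25 - right where the experimental co-efficient of accuracy has been sitting.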

If any live human being is looking at this, I would be interested in comments.  (I fear almost all hits to this site are from spiders and robots).  I had expected to get accurate hits, if all was pure random, at around .33, but the number is consistently .23 to .24, and in thinking about why, I believe I have hit on the reason - ie. there are *two* sequential events, each with a 50% chance of being right, that are at play here, hence the number of accurate hits the network generates is going to orbit around 25%.

Two Facts:  1)  Every successful trade requires two successful decisions.  You have to buy (or sell-short) at the right time, and then sell (or buy-cover) at the right time.  Just on that simple observation, one can quickly realize that the edge-less probability of being successful drops to 25%.  That means, without some form of what is now called "advantage play", you are pretty much assured of losses, if you are an active trader.  It also explains the attraction of the old "buy and hold" approach, where most cash gain is extracted as only dividends or interest payments. In a long-term, real-time experiment, involving two real portfolios, my non-traded portfolio has done significantly better than my actively traded portfolio, despite the actively traded portfolio doing not too badly, considering the market environment of the last 14 years.

2) One of the pitfalls that active traders fall into is the folly of "win maximization" behaviour, versus "gain maximization" behaviour.  Bad trading will be characterized by a long series of small wins, followed by one or two big losses, versus a long series of small losses, followed by a big win.  It is the latter approach that creates gain.  Our own, human neuro-biology defeats us, as detailed behavioural studies show that most commodity traders engage in win-maximizing behaviour, and virtually *all* consistently lose money over time.
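To make the win-maximization versus gain-maximization point concrete, here is a toy expected-value calc.  The payoff numbers are invented for illustration only - they are not taken from the behavioural studies mentioned above:

```python
# Toy expected-value comparison (payoff numbers are made up for illustration):
# "win-maximizing" = many small wins, rare big loss;
# "gain-maximizing" = many small losses, rare big win.
def expected_value(p_win, win, loss):
    # EV per trade: probability-weighted win minus probability-weighted loss
    return p_win * win - (1.0 - p_win) * loss

win_max = expected_value(0.90, 1.0, 15.0)    # 90% chance of +1, 10% chance of -15
gain_max = expected_value(0.10, 15.0, 1.0)   # 10% chance of +15, 90% chance of -1
print("win-maximizing EV: %.2f   gain-maximizing EV: %.2f" % (win_max, gain_max))
```

With these illustrative numbers, the win-maximizing style feels good (it wins 90% of the time) but has a negative expected value per trade, while the gain-maximizing style loses most of the time yet comes out ahead.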

The advantage of automating the trading and investment processes flows from these two key facts described above.  The idea of using the NN is right.  It is just my current implementation that is not yet correct.   I need to find the true signal, and then, I need to determine how best to use that signal.

[Oct. 27, 2017] - Maybe all Copyright Law is bogus.  Read this piece, tell me what you think:

What if an AI creates "artwork"?  Who owns the copyright?

[Oct. 26, 2017] - Never assume anything.  Always check.  I thought my Linux CentOS Firefox was running HTML5 WebM by default, which it is, but the IBM Cloud Ustream product falls back to using Flash - which of course works flawlessly.  I disabled the Flash plugin, and watched as all the video stuff stopped working.  <Sigh...>  The mish-mash of video standards and the security issues they present is one of the great failure points of the modern internet.  Quicktime on the iPad worked (and still works) very well, as it is integrated with the Safari product - which is now unable to render the Yahoo stock quotes page.  Modern browser technology is entirely focused on delivering advert interruptions, and really little else.  Tools like Skype are no longer reliable or trustable, and even simple products like spreadsheet technology have been corrupted with an idiot blizzard of incompatible variants.  (Got messed over with a .xlsx file recently.  Annoying diddling designed to force another unnecessary upgrade cycle on commercial and academic users, who just want to be able to share data tables.)  And yesterday, my Samsung Tab-A wants to destroy my APL apps by dropping a bucket of "Nougat 7.1" on me.  The "continuous deployment" approach of constantly changing software means constant and purposeful instability of operation.  This is starting to look like a giant industry-sanctioned scam.  It's perhaps time it ended.

[Oct. 25, 2017] - Updated the "Firefox+Video How-to" section, with explicit instructions on how to use the Adobe "Primetime Content Decryption Module" to render H.264 (basically, .mp4 video files) within the Firefox browser, in my case, a Firefox-47 version on an older WinXP/SP3 box.  The "Primetime..." thing is for DRM nastiness, which we avoid.  But it also works for non-DRM videos, and is a method that lets Firefox render .mp4 videos.  Since Youtube has dropped all Adobe Flash support, rendering .mp4 files is now a critical requirement for any web browser.  Newer versions of Firefox monitor and transmit browser activity, using various methods, and all support for "plugins" as well as operation on Windows XP/SP3 will be dropped or disabled.  We've been experimenting with K-meleon and other browsers, and we hope to move entirely to Linux soon, as that is where the AI stuff is located.  With WINE, we find critical Windows programs (our APL stuff, Xerion NN, TSM and Lynx data-harvesters, for example) can operate satisfactorily on Linux.  But it has been useful to bring Youtube back online for some of the older XP-based equipment, and explicit details are provided in the Firefox+Video HowTo section.

[Oct. 21, 2017] - Just an update on the video stuff...  The version of Mplayer/GMplayer I compiled from source does not list an audio option for PulseAudio, and I found the x11/xv option works best for video in the gui version (GMplayer).  I found when using SDL for video rendering, I would get slight dropouts in the .MP4 files (sound and image) as they played.  But using x11 for video and the ALSA (Advanced Linux Sound Architecture) driver option for audio (selected when running GMplayer - the on-screen gui version of Mplayer), I do not get any dropouts or spurious pauses in the rendered video.

[Oct. 12, 2017] - Completely "back to the drawing board" on the AI work.  Since AI's do image classification well, thought I would re-frame the basic question into something along that line of inquiry.  Plan: take a market "picture" (maybe still using boolean jumps), then assemble longer time-series, combined with fundamental data (div. rate, EPS, book value, trend-estimate, turtle-N, Williams MFI, etc... - all data that TSM calc's now), and then look much further ahead (maybe 6 to 18 months), and take the price picture to calc a ROR (rate of return).  Then, use a backpropagating neural network with gradient-descent training to classify the various market pictures based on 6, 12 and 18 month actual outcomes.  I should also include better interest-rate information - ie. short, medium and long rates, across both investment grade and speculative (ie. junk) bonds.  Some info on central bank balance sheets would also be useful.  What is funny is how much better I do with a longer time horizon.  My personal portfolio is thumping along fine, and the ones I manage and monitor (and basically don't trade) are just hammering along and also throwing off cash.  The 5-day-ahead NN-AI experiment has demonstrated a co-efficient of accuracy that is stable, holding around 24%, basically showing a completely random result.  The recent past does not give us any hint of what the near-future will be.  Confirms work I did for a Treasury Ministry, using a DECsystem 20/20, back in the 1980's.  Science sort of works.  And maybe finance and econometrics is almost science.  Almost.
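The labelling step in that plan - compute a forward ROR over a long horizon and classify the picture by actual outcome - might be sketched like this (the function name, the horizon argument and the 5% band are my own assumptions, not the TSM parameters):

```python
# Hypothetical sketch of the forward-return labelling step: for each
# observation, compute the rate of return 'horizon' steps ahead and
# classify as 1 = positive trending, 2 = stationary, 3 = negative trending.
def label_ror(prices, horizon, band=0.05):
    labels = []
    for t in range(len(prices) - horizon):
        ror = (prices[t + horizon] - prices[t]) / prices[t]
        if ror > band:
            labels.append(1)       # positive trending
        elif ror < -band:
            labels.append(3)       # negative trending
        else:
            labels.append(2)       # stationary
    return labels

print(label_ror([100.0, 101.0, 100.0, 112.0, 94.0, 99.0], 2))   # -> [2, 1, 3, 3]
```

These labels, one per market "picture", would then be the training targets for the classifier network, exactly as image-classification nets train against labelled photos.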

I am also looking more at image stuff.  I got Firefox 34 running on my older Fedora Linux platforms (they are stable as turntables on granite blocks), and have built a reasonably current version of Mplayer, version 1.2.1 (which uses ffmpeg and ffplay).  Had to fiddle some of the C code in the ffmpeg subdir libavformat, but then got a clean compile.  If you are compiling Mplayer for Linux, make sure to do your ./configure with the --enable-gui option, to get the graphic user interface (gui), so you can run it from a Gnome (or other) desktop.  You also have to set up the "skin" (default is Blue) for the gui to work.  You have to put all the "skins" stuff from the ../Blue subdir, which you get by untarring Blue-1.12.tar, into /usr/local/share/mplayer/skins/default so the gui will work.  This works on my custom-built Fedora kernel (circa Fedora-9), and on CentOS 6.6, Linux Kernel 2.6.32-504.el6.i686 running GNOME 2.28.2. In fact, it works so well, that I am surprised at the awesome quality I am getting on these older platforms.  Check out the 4K example video below.

And get this.  Running Mplayer gives me a message saying "Your System is too SLOW to play this!" on both my old ACER laptop (running the hacked Fedora-9 Linux), which has a 2.0 Ghz Intel Centrino-Duo processor, and my CentOS 6.6 box running a 2.4 Ghz Pentium-4, with 2 Gb memory.  This "Too SLOW.." message is not accurate.  Using the command "mplayer -vo sdl" to run Mplayer with the video output set to SDL (Simple DirectMedia Layer), it runs full-screen, full-motion hi-res video perfectly, and even has sound, despite my Fedora version using an early PulseAudio implementation that is sometimes problematic.  Note: Here is the help/troubleshoot FAQ for PulseAudio:  

With this Mplayer, I can view all video formats, if I have downloaded the file. (You can even fire up Livna, and get the thingy for dee-v D's, if you want). 

Just for the purists, and in the interests of full disclosure: The ACER laptop is a TM6460-6572, and the processor is a 32-bit Intel Core 2 Duo Processor T7200 (2.0 Ghz, 667 Mhz FSB, 4 MB L2 cache).  This older platform has 2 Gb DDR2 memory.  But Mplayer can play a full-screen, wonderfully fluid-motion 4K video of this amazing northern aurora I downloaded from Youtube using the iPad.  The hi-res .mp4 aurora video is from Ron Murry Photography. Here is the link to it on Youtube.  It is a great test video to confirm your video software is rendering 4K video nicely.

You will want to download this file as an .mp4 file, of course.  I use the hacked iPad to do this, since I regularly get buffering interruptions using Youtube or any other streaming technology.  All non-live video should just be downloaded and localized.  Streaming a static file is silly, but of course, the money made by Google and Netflix is not, and given the DMCA, a broken, low-quality solution is better, I suppose, than no internet video at all.

[Oct. 2, 2017] - Sad news. Condolences to families of Las Vegas victims.  Looks like a terror attack & ISIS has claimed responsibility, but that looks to be a bogus claim...   But is it possible a 64-year old white guy, who had a pilot's license & owned two airplanes, and had a nice little retirement home, could really spend 30 minutes shooting at country music fans from a hotel window, using automatic weapons?  Why do this?  Had he lost all his money and then his girlfriend left him, and he fell into complete clinical insanity?  Is this what the future will continue to look like? 

There is a 1969 book called "Stand on Zanzibar", by John Brunner, and I think it won the Hugo.  It was set around now, (2010, actually) and describes an overcrowded, competitive world, where people fear "muckers" (not muggers) - as in sane folks who suddenly run "amok", and start attacking and killing those around them, for no apparent reason.

What I remember also, is the supercomputer in the story, which is basically a big AI, called "Shalmanezer", if I remember correctly.  Part of the plot involves a big trans-national company basically purchasing a small West-African country (Beninia? - ie. based on Benin), and getting permission to do this on the promise that it would completely run the small nation's economy and make everyone reasonably prosperous - but this would involve programming and managing every economic detail, right down to the allowance and pocket change each child would have.  I don't think even Shalmanezer would be able to do it, but some of the other predictions in this dystopian novel seem to have come true.  We now have well north of 7 billion people on this planet (as the novel predicted), and there do seem to be bio-safeguards built into living systems to self-correct hyper-crowding.  I was more expecting rapid fall-offs in fecundity rates, and not so much a rising tide of mass-killings.  Brunner's book, IIRC, was more disturbing than Orwell's "1984", which described a future so awful it was difficult to take seriously.  But Brunner's book had this demographic inevitability that made me quite uncomfortable.  We *cannot* continue to increase human population geometrically, and remain on this small wet sphere, without all of us at some point reaching a dramatic transitional event - either economic or ecologic, or perhaps a combination of both.

[Sept. 21, 2017] - Excellent harvest this fall, second of what looks to be three harvests of hay and grass this year, bound up in rolls, picked up and stored by day's end (see first picture).  Maybe the "Physiocrats" (cf. Francois Quesnay), and their "Tableau Economique", were not quite so quaint and silly after all...  (See the "Economics 2017" section for a quick explanation.)

The Federal Reserve announces officially that it will begin trying to reduce its bloated 4 trillion dollar "balance sheet".  As they push the bonds they bought back into the marketplace, they will drive *down* bond prices, and force long rates up.  There is also likely to be another administered short-rate rise in the US before year end, and at least three in 2018.  This will prevent economic collapse, but will also likely ride us down the other side of the runaway bull market that has been in operation.

Only bank-stocks and other spread-driven financial-service providers will benefit.  Most real operations will face higher costs, rising debt-service rates, and rising inflation.  (My model correlates inflation with interest rate rises, as both are seen as a cost by those who generate economic surplus.  We know rising rates are only dis-inflationary when they rise high enough to cause consumers to defer consumption, and switch to investment.  Rates have to go to 15 to 20% to cause that switch.  If rates stay in the middle zone (3 to 9%), they just get passed forward as a rising cost of business.)  Francois Quesnay was not stupid, and he was not a fool.

[Sept. 16, 2017] - Designing another approach, while continuing to evaluate the existing network.  The new approach involves a "classifier" network - I got the idea from the CT-scan approach.  Forecasting future values is perhaps not the best idea.  Perhaps I just trinary-classify the current most-recent-data-vector of the series under inspection as: 1 = positive trending, 2 = stationary, or 3 = negative trending, and ditch the idea of making any estimates of expected future values.  This could provide better and more actionable information, in much the same way image classification does.  Simple algo: if 1, stay long; if 2 (ie. mean-reversion seems to be happening), execute well-understood mean-reversion stat-arb strategies; if 3, either exit the long and/or consider taking a short position (or at least pay for a couple of puts, perhaps).  This is what I was doing years back with rescaled-range analysis (Hurst Exponents), which my TSM database utility can already calculate (along with Turtle N-values, Williams Market Facilitation Index (MFI), etc.).  I can produce a blizzard of stats - what I need to know is if I should take the bloody trade or not, ie. some sort of probability estimate, so I can make this game have a positive expected payoff.  It may be the whole NN-AI approach should be crafted to estimate something simple, like the "probability of success" for a long position in the target, where success is defined as a positive future outcome that exceeds the risk-free interest rate at a given future time point, or something like that.  Looking at the success of the CT-scan lung-cancer image-classifier NNs (and then getting a CT-scan of my own noisy lungs) has really been an eye-opener.
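The trinary-classify idea can be sketched in a few lines.  This is just a back-of-the-envelope stand-in: an ordinary least-squares slope takes the place of the NN's output, and the dead-zone threshold is a made-up parameter, not anything the real network would use:

```python
import numpy as np

def classify_trend(window, band=0.5):
    """Label a recent data window: 1 = positive trending, 2 = stationary, 3 = negative trending.

    A least-squares slope stands in for the NN classifier here; 'band' is a
    hypothetical dead-zone, in units of the window's step-to-step noise,
    inside which the series is called stationary.
    """
    x = np.arange(len(window))
    slope, _ = np.polyfit(x, window, 1)
    # Scale the slope by the noise level so the threshold is unit-free.
    noise = np.std(np.diff(window)) or 1.0
    t = slope / noise
    if t > band:
        return 1   # trending up: stay long
    if t < -band:
        return 3   # trending down: exit / consider puts
    return 2       # stationary: mean-reversion stat-arb regime

prices = np.array([100, 101, 103, 102, 105, 107, 108, 110])
print(classify_trend(prices))  # → 1 (steadily rising window)
```

In the real version, the NN would replace `classify_trend` entirely, taking the whole input-vector suite rather than just recent prices - but the three output labels, and the trade rules hung off them, would be the same.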

[Sept. 15, 2017] - NN AI technology is very good at image processing, and apparently, NN techniques are being actively used to detect tumor growth in lungs by inspecting CT-scan tomographic image-sets.  As of two days ago, a company, Matrix Analytics, has indicated it is beginning validation trials.  This is of interest to me, as I had a CT scan last week, and am still awaiting results.  It is one thing to experiment with this technology, and another to be at the pointy edge of it, having it rain down on oneself as low-intensity (but high-energy) x-rays from a big, rotating scanner, with one's bloodstream full of radio-opaque iodine compound that makes all your fingers and toes feel like they are being dipped in hot water.  The tomographic image resolution is so much better with the radio-opaque iodine compound, that one can certainly see why it is used.  But you can also feel like you have wet your pants (which I did not, thank heavens...).  Apparently, NN AI methods are working *very* well at reading and interpreting the CT-scan images, so much so that the technology is rapidly being commercialized.

[Sept. 12, 2017] - FD: The two most critical drivers that I did not include in the NN input suite, and that I now know are important to my NN's predictive ability, are: 1) exchange-rate/currency valuations, especially for the currency of the target, and 2) analysts' revisions to price targets.  Both of these can be shown to have important impact on market price, and both can be sourced, but the analysts' re-pricing stuff is a bit tough to include - although if it is translated to booleans, it should slot in nicely once I can determine the most effective way to do this.  It's no secret that my target is currently the most undervalued of the Cdn banks - trading at roughly a 12% to 15% discount to the rest of the group - and it throws off a big dividend that lets me keep the lights on at the farm.  (Hey, buy a farm, if you have some money.  Then make sure you have a source of income that does *not* come from the farm.  That way, you can keep farming!  I read about a guy who won a lottery.  The newspaper asked him what he would do with the money.  He replied:  "Well, I like living here.  I'm going to keep running my farm, until the money is all gone...")

People love to whine and complain about banks.  And I have tech-associates who don't want banking-sector clients, because they detest working for banks, for many reasons.  But here is a secret:  Want to make money?  Don't buy shares in companies you like.  Bad idea.  Buy shares in companies that you *hate* - but still do business with, anyway (said the owner of several iPads and a MacBook Pro).  If you hate a company and their products - yet you still do business with them (think airline, oil-company, auto-company and tech-company stocks, not just bank/insurance co.s) - then that company has a lock on something, and will likely not explode in your face like a hand-grenade (Lehman Bros.), or wither and die, like so many pretty things did during the first dot-com bubble.  On the other hand, I know a guy who bought Carnival Cruise Lines stock - and made serious money - and also took lots of cruises with them, and loved the company - so there are exceptions to the rule.  But I liked Nortel, and even did some work for them.  When they were $120/shr, my analysis suggested a price target of $20 was about right.  But I could never keep a short on for more than an overnite trade, as the thing would just get bid up too easily.  Until it exploded and died, of course.

PS: I agree with the TD analyst, who has a $120 target on CIBC, and I remain in my long position.  (Gotta keep farming!)  You can check out the video of the TD bank analyst at the url below.  FD: I also personally have TD stock, and run a private portfolio which has a significant position in, and exposure to, TD stock.  Also, other than being a customer (without a yacht), I have *no* connection to, or with, TD, or any other Canadian bank or Canadian public company.  And anything I write or discuss here is for analytic & educational purposes, primarily for assessment of my neural-network artificial intelligence/machine-learning experiments, and is not to be taken as or construed as investment advice.  (My lawyer said I should say this.  And it is true, too.)  You should do your own analysis and make your own decisions, or turn all your funds over to a professional you can completely trust.  (Not Bernie Madoff, or any of his family, ok?  Do background checks.  Seriously, do this.  When I worked free-lance for a small brokerage firm, any time they were talking with company people about a possible stock flotation, they would *always* do background checks of the company officers, and see if anyone had been a discharged "bankrupt", or had a criminal record.  Scam-puppies are as common as Barnum's "suckers" now.  There are whole litters born every few minutes...)  That is why you must do your own research, and make your own decisions.  It is the only way to really learn, and you *really* have to be careful now.  See, you have to invest.  Even buying gold, and burying it, is an investment of sorts.  (That trade worked very, very well in Germany and Austria, in the 1920's.)

Ok, here is a link to the TD bank analyst, chatting about Cdn bank-stock Q3 results...

Send me a note if this link does not work...

[Sept. 7, 2017] - Just discovered "Blender", the codebase and application suite used to produce "Big Buck Bunny", the animated short film used as a test suite for video rendering by many.  Very cool.  Here is a good AI project:  Take all the market data - in real-time, of course - and lash it up like an old breadboarded circuit, to a bunch of 3-D imaging software, giving a high-res, fast picture of how market action is unfolding.  (Hint: Morgan Stanley or somebody in NY did this years ago, but in flat 2D, using APL I seem to recall.  It worked for a while..).  Maybe time to take another run at this idea, as the human brain is very good at shape & space recognition.  Maybe use VR glasses to watch it (and initiate trades), in real time.  Probably already being done, I suspect.

- posted details on how to fix Firefox so you can see my posted .MP4 (H.264+AAC) videos for Bimbo and Betty Boop.  These are great pieces of American history, and deserve to be more widely known.  To hell with those who would obfuscate and deny our shared history.  Seriously, I mean this.  The copyright rules and DRM stuff and aggressive IP foolishness are damaging the US and the world.  Our shared cultural history belongs to all of us.  Fraudsters that want to block access to our knowledge-bases, and tear down our public monuments to fallen figures of history, are fascist criminals of the worst kind. I feel strongly about this.  History - true history, not the radical-leftist fraud-talk that is popular in the left-liberal schools - is important, as it provides a clear map to the future.  We should detest violence, yet also remember that war works.   War is often the painful price of freedom.  It is a high price to pay, but like the price of most valuable things, it is often worth paying.   No-one remains free and safe or has any real security, when bad things are allowed to be done in the name of good ideas, and evil is allowed to flourish.  Remember this, people.

I posted the Firefox parameter stuff so folks can see how it can be configured to work properly, and show all HTML5 video formats correctly.  The idiotic blizzard of video standards is perhaps the price we have to pay for fast innovation.  It took me *days* to figure out how to make something that was working correctly in 1997 start working again in 2017.  WTF? is about all I can say.  Also, if you are in one of those seriously non-free places (oh, I am thinking somewhere in the East maybe?), make sure you learn about and use TOR.

WRT the NN-network project, I have located a good source for some of the additional data I need to fix the poor forecasting ability of the current model.  In fact, what I have discovered looks like it might be quite useful, AI network or no AI network.  Key for me is to keep on, here.  I am scheduled for a CT-scan tomorrow, as per my doctor's suggestion.  (My doctor is pretty cool... he is a very large black dude from Africa, who reminds me of Hunter Thompson's attorney in "Fear and Loathing...")  He and I took a look at a quick chest x-ray, and in later consultations with another fellow at the clinic, he scheduled this quick CT imaging for tomorrow.  The urgency here is surprising to me.  Lends a certain focus to my AI work efforts.  If I am to get any benefits from this exercise, best if they come sooner, rather than later...  I need a new pickup truck, and they cost $70,000 here.  If I want to have a bit of time to motor around in it, best if I can achieve results in a timely manner, before Mr. D. comes to visit, and knocks his bony fingers on my little front door!   :D

[Sept. 1, 2017] - Max Fleischer was an amazing genius, very far ahead of his time.  His wild, surreal "Talkartoons" were full of both modern, sophisticated jokes and clever classical references.  How did Fleischer Studios not survive to be bigger than Disney?  In searching for some other technical material, I ran into a reference to "Bimbo, in The Robot", a 1932 Betty Boop (Fleischer Studios) cartoon.  The word "Robot" had only entered the English language 10 years earlier, from Czech writer Karel Čapek's "R.U.R." (which stands for "Rossum's Universal Robots"), a play about "Blade Runner"-style replicants who have taken over the world - and it premiered back in 1921!  In the "Talkartoon", Bimbo is an inventor who has a two-way "Television" he uses to talk to his girlfriend Betty Boop, and he builds a strength-enhancing Robot so he can win a $5000 prize-fight and get his girl.  What a classic story-line!  As I watched this impressive old American-genius anime, it really hit me: THIS... this is what we need - not any sort of prediction device - but an active, constantly learning, assistive augmenter, that helps us overcome otherwise impossible-to-succeed scenarios.  Two Fleischer Brothers clips: First is "The Robot"; second is Betty Boop closing a show in NYC, and then flying her own airplane to Japan, and doing a show in Tokyo, where she sings in Japanese.  Apparently, Osamu Tezuka, who authored and developed the "Astro Boy" series (about an atomic-powered boy-robot, which was a wildly popular anime TV show in post-war Japan), watched Fleischer Brothers anime in pre-war Japan when he was a young boy, and was very impressed by it.  Tezuka created hundreds of anime publications.  I particularly like his "Black Jack" series.


[Aug. 31, 2017] - I might have to defer to Prof. Andrew Ng.  Attempting to forecast the actions of banks and bankers, and associated equity values and resulting market prices, by watching their reported numbers and related economic data series, looks to be not-doable.  The Bank of Canada has some serious series at its disposal, and a *lot* of wise people, as well as significant economic power and authority.  They need not simply forecast, they can alter what occurs by open-market operations.  Yet even they could not come close to simply forecasting the second-quarter Cdn GDP change.  The consensus number was just above 3.0 percent.  But Q2 Canadian GDP came in at a robust 4.5% annual change for the April to June 2017 period.  This is a boom-time GDP delta.  I was expecting a lower number - in the 2.5 to 2.9 range (ie. below the GDP delta for the first three months of the year).  So, hell, this just demonstrates what my little Treasury group learned back when we were young pups out of school - even the best of the best, with the most accurate, current data, cannot forecast worth shit.  Let's be clear about this.  You cannot forecast the future any better than the null-forecast.  The null-forecast is: "It will be tomorrow what it is today."  In other words, the best estimate of the future value of a random-walk price series is the current price today, ceteris paribus.  (Ceteris paribus is economist weasel-talk, which means "everything else being equal" - which, of course, it never actually is.)
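The null-forecast claim is easy to check numerically.  A quick sketch on a simulated random walk - the 5-day moving average here is just an arbitrary rival forecast, chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
# A driftless random walk is the textbook case where "tomorrow = today" is optimal.
prices = 100 + np.cumsum(rng.normal(0, 1, 5000))

actual = prices[20:]
null_fc = prices[19:-1]   # the null-forecast: tomorrow's price = today's price
ma_fc = np.array([prices[i - 5:i].mean() for i in range(20, len(prices))])  # rival: 5-day moving average

mae_null = np.mean(np.abs(actual - null_fc))
mae_ma = np.mean(np.abs(actual - ma_fc))
print(f"null forecast MAE: {mae_null:.3f}")
print(f"moving-avg   MAE: {mae_ma:.3f}")
# On a random walk, the null forecast's error is reliably the smaller of the two.
```

On real price data the gap is rarely this clean, but the point stands: beating "it will be tomorrow what it is today" is the bar, and it is a surprisingly high one.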

So, if you are trading, forget about trying to forecast.  You are best to make reactive decisions in response to observed events, and be agnostic about the future.  Let the market tell you, and don't waste time and effort trying to forecast future values.  Simply position yourself and your holdings for what seems most appropriate at the current time.  I have my trading portfolio, and a similar-sized long-term classical investment portfolio, which I deliberately do not trade.  The "not traded" portfolio is absolutely beating my trading portfolio, by a very large percentage now.  Its percent-change sign is nicely positive, and my trading portfolio has a big negative percent-change sign.

The investing game is unique.  It is different from other human activities, in that often, the less you do, the *better* you do.  The "Warren Buffett" portfolio, where the holding period is "forever", does better than almost all actively traded portfolios.  For a keen, clever, hard-working guy like me, this is a very hard lesson to learn.

As an investor, long-term, I have actually done quite well.  But since the Trump election in the US, I have managed to get almost perfectly wrongly positioned, cashing out and missing the run-up, repositioning at a local market top, and now riding a couple of big positions into the toilet.  It's a neophyte mistake, and it makes me all the more angry at myself for this time-wasting exercise in machine learning.  There is a real chance that neural networks are completely useless here, as they have to be trained on *historical* data, and it is the nexus point of the *right-now*, as it moves forward thru time, that matters.  It is as if a bullet in flight is more like a rocket with fins than a bullet on a trajectory - a tiny perturbation of the trailing control surface (the fins) can cause a massive redirection of the flight-path.  As such, any examination of the historical trajectory is basically a complete waste of time, as it has so little bearing on the future "point of impact".

Curiously, the fellow I know who uses older "expert systems" with fuzzy-logic, said pretty much this about neural networks in market contexts.  He thought they were completely useless, and I am thinking he was perhaps right.

Although my trading activity would show multiple runs of successful trades, sometimes 5 or 6 wins in a row, of roughly $1000 each, on balance, I have to admit I would have been much further ahead if I had simply avoided *all* statistical-arbitrage style trading, and just taken positions and held them, until I found something better.  And I even have a detailed academic paper on this very topic somewhere, which did a formal assessment of long-term commodity *investors*, virtually *all* of whom lost money over time.  There were basically *no* long-term "winners" in the study, yet the punters kept participating, often for years.  Almost all were wealthy business-owners, with significant surplus funds available for their "investing" (trading) activity.  They all lost money, but kept playing.  The study showed they engaged in "win-maximizing" behaviour - maximizing the number of "wins" they could have, not maximizing their possible profit.  They played simply because of the neural-technical characteristics associated with "random reinforcement", a powerful psychological training technique.
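The win-maximizing arithmetic is worth making explicit.  A deterministic toy example (all the dollar figures are hypothetical): nine quick $1000 wins for every one $12,000 ride-it-down loser gives a very reinforcing 90% win rate - and a steady net loss:

```python
# Hypothetical "win-maximizing" trade ledger: take profits quickly at +$1000
# nine times out of ten, but let the tenth trade run down to -$12,000.
trades = ([1000] * 9 + [-12000]) * 100   # 1000 trades in total

wins = sum(1 for t in trades if t > 0)
print(f"win rate:  {wins / len(trades):.0%}")            # → 90%
print(f"net P&L:   ${sum(trades):,}")                    # → $-300,000
print(f"per trade: ${sum(trades) / len(trades):,.0f}")   # → $-300
```

The wins arrive often enough to keep the rat pressing the lever, while the expectancy per trade stays firmly negative - exactly the "random reinforcement" trap the study described.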

In Grade 8, when other kids were playing outside, I was repeating B. F. Skinner's experiments with rats in a "Skinner Box", training them to have a "conditioned response".  (The bell rings, and the rat jumps over to the other side of the box, without even thinking about it.)  I realize now, the market does the same thing to trader/investors.  It teaches them to hold bad positions, and rewards them for making unwise, short-term trades.  I am the rat in the box, and as the prices change, I jump in and out.  Not good.

If you want to be a very successful investor, you have to be the guy ringing the bell, not the poor programmed rat doing the jumping.  Zen-like enlightenment, satori desu.  Prajna.  I have been the rat, and my entire approach using the boolean neural networks here is not just unsuccessful, it looks to be plain wrong. 

It is like the "caterpillar --> butterfly" problem.  You have to watch the process unfold long enough to see the caterpillar transform into the butterfly.  And your investment horizon has to be long enough to capture that entire transformation.  Or, you must trade at nano-second intervals, and capture rapid (but very predictable) small, very short-term changes.  Try to mess around in the middle-zone, and you will just be the guy supplying the capital to the game.  Perhaps I will fly to Zimbabwe for the Harare International Carnival, and watch Zodwa Wabantu do a modern version of Josephine Baker's "Danse Sauvage".  Those who are offended by Ms. Wabantu's radical sexualized dance style should watch Ms. Baker's 1927 Paris performance, which was recorded at the Folies Bergère.  Fashion, art, dance and fame are like the markets.  The more they change, the more they remain the same.

[Aug. 23, 2017] - More research on deception/fraud in banking.  Learning about how bad the threat scenarios created by modern malware really are.  As an author of Android apps, which I have available in the Google Play Store, I was particularly surprised by the sophistication of the "BankBot" malware that made it into the Android-based Google Play Store, and has been used to enable fraudulent bank transactions.

It works as follows: A user downloads a bogus app (a "flashlight" app, or an app that purports to show funny videos, or something like that), and once this seemingly benign app loads, it then downloads a malware APK (application package) and side-loads this excrement-pile of toxic code, which overlays fake screens when you make legitimate access to your online bank accounts.  The app watches for your access to the bank's website, and then overlays a fake screen to capture your login credentials, which are then transmitted to the criminal's device.  This defeats two-factor authentication - the "BankBot" malware even blocks the bank's SMS confirmation message, and sends it to the criminal's device, so that SMS-based confirmation of the transaction can be faked.  In this way, the "BankBot" code is using spycraft-style data exfiltration - your SMS messages are read, and re-sent to the criminal, so he can craft a successful fraudulent reply, as he is creating a cash-transfer transaction that will empty your bank account.  Finally, the "BankBot" app will lock your device to prevent access, so that your bank cannot even SMS or phone you.  The device will present a fake notification screen saying it is updating, and remain unusable.  To remedy this, users will reset their device to factory settings, and thereby typically destroy all forensic evidence of the "BankBot" code.

Further details, including MD5 hash-codes of the bogus Android APK's, are here, on a SecurityIntelligence website, which is operated by IBM.  This information is from July 27th, so we can expect all these examples of "BankBot" apps will have been removed.  What is interesting here, is the level of sophistication that this malware shows, as well as the note that it does not operate at all, if the targeted user is seen to be geo-located in a CIS country.  Note that this does not prove the authors are Russian.  They could be Polish, Romanian, or British, and cleverly seek to vector attention away from themselves.

[Aug. 22, 2017] - I get hit by several attempts per week to drop payloads of badcode onto my little cluster of toybox machines.  I actually had someone get thru to discussions with me using some fraud via LinkedIn.  (I am reminded of a trick by M....d operatives to get close to an Iraqi n-scientist.  They had a pretty girl in a car pull up to where he was waiting for his ride, and pretend to be interested in talking to him.  The target told his wife about it in the evening, and she immediately said "You idiot!  That was the M....d.  No woman would want to talk to you in the street like that!"  Of course, the wife was spot-on correct.)

Once you even just stick your nose even a tiny bit into the darkworld, there are all manner of impressive honeytraps.  I remember once in Osaka, a pretty J-girl tried to vector me to some badplace - she just came up and started chatting me up in front of a shop window.  I even spotted her handlers across the street.  She grabbed my arm, tried to walk me away from where I was headed.  I literally had to physically remove her grip, and sprint into my client's office. 

Once you start really looking at how banking was done in the past, and is done today, there is a *very* large rabbit-hole, once the topic of deception is engaged.  My, my, but it is large.  From the Renaissance Medici Bank, I had to check out the modern scam of the "Bank Medici", an Austrian operation which lost over $2 billion (US) in the early 2000's via the Madoff fraud.  The appeal of the BitCoin block-chain approach, with its public distributed-ledger concept, is obvious - secrecy is pretty much always used to hide a toxic reality.  Dig further into bank secrecy and associated modern issues, and there truly be dragons, for you sail right off the map of truth, into the blackhat world of hyperfraud.  Madoff raised money successfully because he paid his money-raisers big fat fees, and was able to troll in cash because he promoted the lie that he had built a "Dutch Book" on the Nasdaq, using some options trickery that operated on the index ETF's.  This was of course discovered when Harry Markopolos tried to replicate what Madoff said he was doing, and confirmed by detailed inspection that there were insufficient trades in SPX derivatives to account for the volumes Madoff was claiming.  Markopolos tried to report this to the New York SEC, and was rebuffed, as Madoff had cultural connections with the New York regulators which appear to have offered him protection.  Nothing really changes.  I recall Markopolos wrote a 40 or 50 page paper on why Madoff's strategy could not possibly work, and submitted it to Massachusetts regulators.  What Madoff did so successfully was to harvest funds from rich folks, ponzi-scheme style.  His large returns were paid from the funds harvested from new clients, in classic ponzi-scheme fashion.  There is a risk in any attractive high-yield investment, that you are simply being repaid with your own money, and that your principal has already been used for other purposes.  Has it been lost?
You never really know until you attempt to obtain its return.  This focus on return *of* capital, as opposed to return *on* capital, is of critical importance.  The history of investing shows that most investments consume capital, and only rarely add to it.

In the modern world, there are new risks.  I read how, earlier this month, a man of many hats (black, grey and white?) was arrested by the FBI as he attempted to return to the UK.  He is Marcus Hutchins, famous for being the white-hat (good guy) hacker who was able to shut down the "WannaCry" attack that targeted National Health Service computers in the UK.  He apparently assisted the GCHQ (the Brits' version of the USA's NSA) in vectoring the WannaCry attack into a DNS sinkhole.  Details here:

What is interesting for me, is that his arrest is related to the Kronos malware.  He is alleged to have created part of it, and to have been involved in its deployment.  Kronos is a very impressive trojan.  It operated between 2014 and 2015, and was typically installed via spam or bogus downloads.  It could lift banking information from an infected machine, and successfully drain an online bank account, if it could find any banking credentials.  Copies of the code were marketed on the darkweb.  Hutchins is alleged to have authored part of the code, but detailed analysis of the malware suggests this is unlikely.  Not impossible, just unlikely.

And even more interesting, is the fact that GCHQ apparently knew the FBI was going to arrest him, and that he was arrested at the end of the DEF CON hacker convention in Las Vegas, as he was boarding his return flight to the UK. 

The banks assert that online banking is secure.  The Kronos trojan shows that this was absolutely *not* the case, and the FBI is probably under extreme pressure to bring a prosecution.   It would appear that deception, mis-direction and outright lies are still the stock in trade for financiers.   In Canada, we do a *very* good job at keeping our banks honest, and well capitalized.  (Hey Jeremy!  Nice work!  Don't stop. Carry on!).  But in the rest of the big bad world, this is not always the case.

For me, hacking is all about *increasing* and *enhancing* security, as unless you have *COMPLETE* access to *ALL* internals of your machine, you are simply relying upon what others have promised you - you have no real knowledge at all about whether your systems are secure and trustable, at any level.  It is what the courts call "hearsay evidence", and it is not even allowed to be entered as evidence, is it?  Same with the modern black-box products that Microsoft and Apple ask us to use.  We are fools if we take them at their word, and simply trust the assertions of others.  Only with fully *open-source* software is there any hope of creating trustable, secure products.

Three things come out of today's analysis:  1) Block-chain, with its open, visible-to-all distributed ledger (no "libro segreto" nonsense) is probably the future of finance.  2) Hacking your own computer to obtain, at the very least, full "root access" - ie. complete access to all code that controls your financial-critical software - is necessary, in order to have any security whatsoever.  3) Secrecy, deception and obfuscation are the hallmarks of corruption, fraud and criminality.  Publicly funded agencies - like the NSA, the GCHQ and our own CSE - should be tasked with actively assisting open-source developers in *hardening* modern operating systems - not in exploiting discovered weaknesses to dropkick tracking and monitoring payloads onto everyone.
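A toy sketch of point 1, the visible-to-all ledger idea: each entry carries the hash of the entry before it, so any tampering with history breaks every later link, and anyone who cares to check can see the break.  This is the audit-trail core only - no consensus protocol, no network, all names here are made up:

```python
import hashlib
import json

def add_entry(ledger, record):
    """Append a record, chaining it to the previous entry's hash."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    body = json.dumps({"record": record, "prev": prev}, sort_keys=True)
    ledger.append({"record": record, "prev": prev,
                   "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(ledger):
    """Recompute every hash link; any edit to history breaks the chain."""
    prev = "0" * 64
    for e in ledger:
        body = json.dumps({"record": e["record"], "prev": prev}, sort_keys=True)
        if e["prev"] != prev or e["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

ledger = []
add_entry(ledger, "Alice pays Bob 5")
add_entry(ledger, "Bob pays Carol 2")
print(verify(ledger))                         # True - the chain checks out
ledger[0]["record"] = "Alice pays Bob 500"    # "libro segreto"-style tampering
print(verify(ledger))                         # False - every auditor sees the break
```

Real block-chains add proof-of-work and distributed copies on top of this, but the anti-secrecy property - history that cannot be quietly rewritten - is already visible in the ten-line version.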

Modern technology is at a turning point.  Since the 1920's, technology has made our world better.  The process accelerated after the Second World War, and massive economic improvements have resulted in extreme cost reductions for advanced processing ability.  But AI - the next wave of innovation -  will not be positive or beneficial, if we do not deal with the deception+fraud problem.   AI's can be misdirected by bad data. Weaponized AI may have extreme negative consequences.   It won't be "killer robots", it is more likely to be like Orwell's "1984", mass wire-tapping of all citizen information, and wars fought not with guns, but with poisoned food and water, computer-controlled bio-weapons, and social breakdown engineered by design.

We are at a point where the technology can be and is being used to monitor, track, hurt, degrade and impoverish, almost as often as it is used to assist, benefit and improve the lives of people.  What history shows is that technology is always weaponized.  It is foolish to expect this process to be limited by legal statutes.

There is one chance here:  Sunlight.  We can and should open-source all code and modern AI methods, so that everyone - and especially those who are technically skilled - can act as auditors and analysts, and really see what is being done.  This holds the gains from fraud, obfuscation and deception down to a low level.

This will not just keep us safer, it will also improve the operation of our markets and our financial institutions, so that the new technologies can be used to confer positive life benefits rather than higher survival costs.

For the technically curious, here is a link to a detailed assessment from Malwarebytes Labs, of the internals of the Kronos trojan.  The conclusion they reach is that this was the product of a sophisticated development effort by a skilled software team, not the experimental work of a young lad with AS.  Either way, it looks to be impressive work, despite its criminal intent.  I've looked at this link safely with a current Firefox browser, but I do not download anything unless I can triple-verify the hash-code of the file.  Another good idea is to only use email software that processes messages as plaintext or plain HTML, and cannot even be set to execute code.
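For reference, this is the sort of hash verification I mean, in a few lines of Python.  The filename and the "published" digest are placeholders - the digest shown is simply the well-known SHA-256 of the three-byte string "abc", which the sketch writes to disk as a stand-in download:

```python
import hashlib

def sha256_of(path):
    """Stream a downloaded file through SHA-256, so even big files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder download and placeholder vendor-published digest (SHA-256 of "abc").
published = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
path = "installer.bin"
with open(path, "wb") as f:
    f.write(b"abc")   # stand-in file whose SHA-256 matches 'published'

# Compare the computed digest against the published one, character for character.
print("OK to open" if sha256_of(path) == published else "MISMATCH - do not run")
```

In practice, compare against the digest the vendor publishes over a separate channel (their https site, a signed announcement), not one sitting beside the download - and note that MD5, which older advisories still quote, is broken for this purpose; prefer SHA-256.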

If you don't read any other articles on malware or security issues this year, please read this one from Malwarebytes.  I was way down the hole when I found this, and it is good.  The Kronos trojan is still in active use, still being distributed, still in operation as of August 18th, 2017.  It is now typically used to drop additional virus payloads on machines.  I had not realized the level of sophistication this stuff had reached.  The modern machine (Windows 7, 8, and 10, Firefox xx.x) has reached a level of active integration where all sorts of automatic code is being run for everything you do.  It is just bad, bad design, from the point of view of the user who wants to have local control.  Run a modern O/S and a modern web-browser, and you are already pwned, as you are then not even in control of your own technical environment.

[Aug. 21, 2017] - Read with interest the details of a letter sent to the UN by a group of influential tech types, asking for a ban on weaponized AI (CNBC reports: "Elon Musk joins more than 100 tech bosses calling for ban on killer robots"...  gotta love the MSM...).  I am pretty sure weaponized AI is already deployed.  Drones are already being used to launch missile attacks that have killed hundreds - possibly thousands, according to some estimates.  What happens if the radio link to the killer-drone platform is lost?  The drone operates autonomously, of course.  So "killer robots" are already with us right now, and are being used.  Asking the UN to ban weaponized AI is like asking men to only have intimate relations with their wives.  Perhaps a noble idea, but I can't see it being effectively operationalized.

WRT the project, I am in the weeds on the data issue.  The data I need is not available to me.  And I've been studying in some detail exactly how banks operate internally - looking at both history and current data.  What an amazing learning curve.   All really useful data is either unavailable, or highly obfuscated (best example I ever saw, the financial statements from Enron Corporation, before its collapse, with the quarterly filings of Lehman Bros. coming in a close second... ).   

And if one examines history, it gets even better - the famous Medici banking empire (1397 to 1494) explicitly kept a second (accurate) set of books, known as the "libro segreto" (literally: "secret book"), which kept the true (as opposed to the public) records of partnership details on various ventures, debits, credits, deposits, and the true value and accounts of ventures the bank owned a direct interest in - silk making, shipping, wool processing, the alum trade, etc.  (Alum was a critical industrial input used to de-grease wool, so fine wool textiles could be made.  One needs to realize the textile trade was the 15th-century equivalent of the auto industry.)

The Medici bank often ended up owning a big part, or even all, of a business, as it was often given to them in lieu of loan re-payment.  I am reminded of Laidlaw Inc., a failed transportation business in Canada, which ended up being owned and run by its bankers, once Laidlaw's shares fell to zero, and it defaulted on its bank loans.

Basically, what history and current events are telling me (rubbing my nose in, really), is that public data - even today (perhaps *especially* today) - does *not* provide the best or most accurate picture of what is really happening.  So any exercise that uses machine intelligence to determine what is really happening, and then make a forecast, is unlikely to be successful if it is not driven by accurate data.  If the data one is using is not only perturbed by randomness, but also generated *by design* to paint a false or misleading image of the true financial and commercial state of events, then one's effort is better directed at obtaining the "libro segreto" information, rather than messing about with deceptive and obfuscated public material.  This is so obvious, of course, is it not?  The bottom line here is that perhaps I am just wasting my time on this particular AI exercise.  I need - somehow - to obtain the modern equivalent of the "libro segreto", or I am just programming failure into the forecasting process.

[Aug. 15, 2017] - Revisited design, and revised.  Need a lot more data, but I am pretty sure I can make the NN-AI work *much* better.  I am missing two very key data-streams.  Surprised about how much AI talk/design is focusing on "customer experience management", as opposed to offering anything real.

A recent survey says >85% of companies want to install/acquire/utilize AI to manage customer contacts.  Of course - what org. would say it does not want an advantage?  But be careful with AI that is focused on manipulating the customer "experience".  Most customers/clients already feel so messed-over by the clever tricks coming from modern neural science and behavioural economics that they are ready to light torches and march in parades.  I'm quite serious.

Most AI will not offer *any* benefits at all to customers - it is designed to use their data, and get them to open their wallets further (viz. the awful Windows-10 experience). Most current use of AI by companies borders on toxic. What AI *can* be used for, is to augment and assist folks directly - like night-vision targeting goggles, or HUD (Head-Up Display) technology in fighter aircraft. This type of AI is controlled and used by the *customer/client* directly, and may in fact make life more difficult for dishonest, manipulative commercial and government entities.

For example:  Imagine an AI "databot" that harvests company-specific info from obscure SEC/Edgar filings, and interprets the probability of financial distress for counterparties - sort of a combo real-time shopping advisor/ stock-market investment analyst.  It could *score* a company, review past customer comments and legal case results, and give you a real-time feedback of whether you should do business with the entity.
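
The scoring half of such a databot can be sketched without any of the SEC/Edgar plumbing.  The sketch below uses the classic Altman Z-score - a published linear discriminant over five balance-sheet ratios long used to flag probable financial distress - as the scoring rule.  This is only one plausible choice for the "score a company" step; the fetch-from-EDGAR half, the function names, and the sample figures in the usage note are all hypothetical.

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_cap, total_liabilities, sales, total_assets):
    """Classic Altman Z-score: a weighted sum of five balance-sheet
    ratios, each scaled by total assets (or total liabilities)."""
    a = working_capital / total_assets
    b = retained_earnings / total_assets
    c = ebit / total_assets
    d = market_cap / total_liabilities
    e = sales / total_assets
    return 1.2 * a + 1.4 * b + 3.3 * c + 0.6 * d + 1.0 * e

def distress_flag(z):
    """Altman's published cut-offs: below 1.81 is the distress zone,
    above 2.99 the safe zone, in between the grey zone."""
    if z < 1.81:
        return "distress"
    if z > 2.99:
        return "safe"
    return "grey"
```

A databot would pull these seven numbers from the counterparty's filings and hand the flag to the shopper, e.g. `distress_flag(altman_z(50, 200, 80, 600, 300, 500, 500))` on a healthy (made-up) balance sheet comes back "safe".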

Direct augmentation of client decision process:  This is where the real payback from AI will come - direct assistance to people's decision process, so you can make better, more profitable decisions as you navigate your day. The last thing you want to rely on is the disinfo that companies and government entities want to download on you. You don't want a bigger filter bubble - you want to use your AI to *escape* the web-of-lies that is your current filter bubble.  Who wants further limits, restrictions and programmed-direction of their analysis/search/decision efforts?  Let's see... ah, no one at all, probably, right?

[Aug. 11, 2017] - I now know what I am missing in the network - possibly a key reason why it was not predicting with much accuracy.  Of course, the dataset is very small, but I am also missing a very key data series, which is probably critical.  It will be a bit of a research project to determine how to get the data, as it is not available on the internet anywhere, as near as I can tell.

[Aug. 9, 2017 - PM] - This website has been an interesting experiment, and has let me stay focused to create ver. 1.0 of the product I want to make.  Need to scale up the dataset and the network, and re-think how it provides its results.  I will need to spend some money on books and probably a cleaner datastream (a Bloomberg?) or something like that.  There has been some viewing of this site, but not a single lead or question of any kind, via email.  So, it looks like it is just robots and web-spiders crawling the site.  <sigh>  I found a good series of papers, from a conference in 1985:  "Maximum Entropy and Bayesian Methods in Inverse Problems".  Sadly the book is $400.  For a single book.  Blink  Faraday was lucky he worked in a bookstore.  Leibniz had a library.  Newton had Cambridge and the Mint.  I will check out the Univ. of Waterloo library, and see if they have any suggestions.

[Aug. 9, 2017] - Down another rabbit hole... Reviewed Brian Randell's 2013 presentation at the Bletchley Park Museum, on the rebuild of the Colossus I.  Amazing achievement.  As I drilled further into the characteristics of the vacuum-tube powered Colossus, searching for a copy of the Horwood Report (only hard-copies from the Archives at Kew, apparently), I learned about Donald Michie, who sadly was killed in a car crash driving from Cambridge to London, 10 years ago.  He and Turing were friends at Bletchley, and Prof. Michie was one of the founders of AI research in the UK.  Michie was working on really interesting ideas, really early on.  Here is a summary from an interview he gave, which describes early efforts:

Prof. Michie's CV is still online, as is his publications list.

His obit was a full page in Nature, exactly 10 years ago, August 2007.  I should like to read several of his articles.  He and Turing really believed machine intelligence was doable.  And apparently, Turing wrote these great papers on probability as it relates to codebreaking, while at Bletchley.  It would be great to read some of this stuff.  But it is so difficult to access any publications now - everything is behind paywalls, or only available to those at universities doing research.  I think of Faraday, who had to work as a servant at the Royal Society, just to have access to a lab.  "Big Science" and "Big Government"....  Blush

Learning about war.  WWI gave us modern political and economic systems, while WWII gave us modern technology.  (If I am to have any hope of building my "Trading AI", it will likely have to incorporate a Wiener filter of some kind.  Or should I call it a Wiener-Kolmogorov filter, eh?)  I remember doing the Durbin-Watson statistic calcs for some time-series stuff in economics school.  But the Wiener filter was made into a real device, attached to gunsights, and used to shoot down incoming German V-1 rocket-planes (buzz-bombs, they were called).  The Wiener filter made the targeting work much better.  Levinson wrote his paper on Wiener RMS prediction in 1947, just after the war. [Levinson, N. (1947). "The Wiener RMS error criterion in filter design and prediction." J. Math. Phys., v. 25, pp. 261–278.]

There is this massive body of knowledge on spectral analysis and probability that has been applied to time-series analysis - mostly with the hope of just predicting what the bloody thing will do as it evolves thru time.  Folks who study seismic data, and astronomers looking at radio-telescope data, use the principle of maximum entropy and a bunch of Laplace math to attempt to reconstruct a signal that has been badly corrupted with noise.  Thru tracking down Levinson and Durbin, I stumbled upon this book from 1981 - conference proceedings from a University of Wyoming conference on maximum-entropy deconvolution, applied to a bunch of different fields.  Great chunks of the book are hidden - but the Google preview lets several articles thru, including the first one, which is a math-heavy (but well written) summary.  The book is called "Maximum-Entropy and Bayesian Methods in Inverse Problems", and it is a collection of papers.  In some sense, this is what the neural network I've built is trying to do - extract a "signal", and then predict its future value a few days hence.  And of course, this assumes there really *is* a signal.  What the network is telling me is that there is *not* a signal - it is all noise.  Is this really the truth?  Or do I just have (seriously) incomplete data?
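
The Levinson recursion behind that 1947 paper is small enough to sketch directly.  Given the autocorrelation sequence of a series, it solves the Toeplitz normal equations for the optimal linear (Wiener-style) prediction coefficients in O(p^2) rather than O(p^3) - a minimal sketch, not a production filter:

```python
def levinson_durbin(r, order):
    """Levinson-Durbin recursion: solve the Toeplitz system
    sum_j a_j * r[|i-j|] = r[i], i = 1..order, for the linear
    prediction coefficients a_1..a_p.  Returns (a, prediction_error)."""
    a = [0.0] * order
    err = r[0]                       # zero-order prediction error
    for i in range(order):
        # reflection coefficient for step i+1
        acc = r[i + 1]
        for j in range(i):
            acc -= a[j] * r[i - j]
        k = acc / err
        # symmetric in-place update of the coefficient vector
        new_a = a[:]
        new_a[i] = k
        for j in range(i):
            new_a[j] = a[j] - k * a[i - 1 - j]
        a = new_a
        err *= (1.0 - k * k)         # error shrinks at each order
    return a, err
```

Sanity check: for an AR(1) series with coefficient 0.8, the autocorrelations are 1, 0.8, 0.64, and the recursion recovers coefficients [0.8, 0.0] - the second lag adds nothing, as it should.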

Do I have to go all the way back to Michie and Turing and try to build Turing's "baby machine" that can learn?  If stock-market price series are random, then successful traders are just lucky.  But the evidence says otherwise.  The "Turtles" (Richard Dennis) showed that trading can be taught, like a skill.  The objective was to trend-follow, and pyramid up your position into the trend, while at the same time observing harsh capital-protection rules.  I've had some success doing the opposite - playing a statistical-arbitrage game, looking for extrema, and betting mean-reversion (as long as the Hurst exponent for the series suggests that the series is either random or mean-reverting).  The beauty of the NN approach is that you let the data tell the network which is the truth - and it may be both or neither, depending on various factors, which maybe the network can "see".  Or maybe it cannot see, as the evolving process really is random.  (Which I do *not* believe it is.)  I must look deeper into the deconvolution approach using the principle of maximum entropy - as selecting for maximum entropy on the input vs the output seems to be the new, best-practice way of training a neural network.  And separating noise from signal has been used quite successfully for fixing high-noise speech transmissions and cleaning up digital images.
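
The Hurst-exponent screen used for that mean-reversion bet can be approximated with a simple scaling estimate: the standard deviation of lagged differences of a series grows like lag**H, so a log-log regression gives H.  This diffusion-scaling estimator, and the lag range, are my choices - a rough sketch, not the canonical rescaled-range calculation:

```python
import math

def hurst_exponent(series, max_lag=20):
    """Estimate H from the scaling law std(x[t+lag] - x[t]) ~ lag**H.
    H ~ 0.5 suggests a random walk, H < 0.5 mean-reversion,
    H > 0.5 trending (persistent) behaviour."""
    xs, ys = [], []
    for lag in range(2, max_lag):
        diffs = [series[i + lag] - series[i]
                 for i in range(len(series) - lag)]
        mean = sum(diffs) / len(diffs)
        var = sum((d - mean) ** 2 for d in diffs) / len(diffs)
        xs.append(math.log(lag))
        ys.append(0.5 * math.log(var))   # log of the std deviation
    # least-squares slope of log(std) against log(lag)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
            sum((x - mx) ** 2 for x in xs))
```

On a simulated random walk this lands near 0.5, and on pure white noise it collapses toward 0 (strongly mean-reverting) - which is exactly the distinction the stat-arb screen needs.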

[Aug. 8, 2017] - Well, it's official:  We go to "T+2" from the old "T+3", as of the beginning of September.  Shortening the settlement cycle is overdue, but at least some action is now happening.  About bloody time.  The future comes slowly, on little cat feet.  But I keep seeing Teslas all over town.  It's interesting, as the Tesla is pretty much the *only* concrete example of positive change I ever actually see.  I think that's why Tesla stock is so crazy valued - just because it has actually been built.  The world has been pumped up like an over-inflated beach-ball with hype, lies, fraud, horror and nonsense.  Talk is so cheap now that it pretty much has negative value.  I use the internet more than I ever have, but I like it the least I ever have.  Is it all fraud now?  At least the Tesla is real.

The only other cool tech that is off-the-shelf-available is the Toyota Mirai.  It is hydrogen fuel-cell electric, but we have only a few hydrogen stations in Canada, and the economics of the Mirai are awful.  It seems that unless you have a Steve Jobs, or an Elon Musk, your product ends up being a committee-built compromise with most of the real innovation effectively gated by the lawyers and accountants.  Everyone loves to talk innovation - but what gets effectively delivered is chrome + tracking software so that the product can replicate Madison Avenue + TV Networks of the 1960's.  The two great cycles of innovation were in the 1920's and the 1960's.   Other than small, cheap computers, there has been very little new created since the 1970's.  We are embedded in a growing cluster of derivatives of derivatives and enhancements of upgrades that yield no real benefits to the end-user.

I had this idea that Artificial Intelligence (AI) technology might yield a whole new class of products.  But I also realize that this might not happen.  It might just be weaponized, and used to amplify and enhance the human capacity for control and cruelty.  As Colin James so eloquently put it: "There's only three things worth living for, and that's chicks, and cars, and the Third World War...".  The thing about a big war is that it drives change.  Real change, not fake change.  The First World War re-programmed Europe, destroying all the monarchy-states and replacing them with democratic political models.  These did not work well, as there was no tradition of democracy in Europe.  But WW1 made the changes happen, and re-drew the maps.  And eventually, democracy won out, because it is better for society to fight using ballot-boxes than in the streets, using clubs, knives, firebombs and guns.

Almost all the practical scientific products we use came out of warfare and conflict.  The slide rule was put to work by Napoleon's artillerymen to aim their cannon quicker (the guy who gets the math right first blows up his adversary first), and it was the British in Bletchley Park breaking codes, and the US Navy enhancing their aiming calculators, that built the first electronic computers.  The First World War took the automobile from a quaint "horse-less" carriage to a heavy truck for troop and materiel transport.  The airplane morphed from a motorized kite built of wires, wood and fabric into a lethal heavy bomber and a high-powered flying machine gun.  Chemistry advanced quickly to create industrial-scale chemical weapons, and radio changed from sputtering sparks on a ship to an electronic device powered by vacuum tubes, allowing for remote command and control.  The technical jump brought battlefield death-levels into the hundreds of thousands, and the massive improvement in global transport allowed the "Spanish Flu" to migrate rapidly around the world, killing between 50 and 100 million people after the war ended.  It was the First World War that created the modern world, scientifically, economically, and politically.

If AI is to have the impact that some think it will have, is it not also likely to be weaponized, and used to destroy the enemy (and their cities) more effectively?  I have trouble coming up with a scenario where this will *not* happen.

[Aug. 7, 2017] - Folks nowadays have short memories.  The "fake-bad-data" issue I mentioned in a previous note offered the LIBOR rate-rigging scandal as an example of deliberately corrupted critical market-data.  I note tonight with interest that Citigroup has agreed to pay $130 million US to settle a private lawsuit brought by LIBOR users - fund and investment managers such as the City of Baltimore and Yale University.


Memo to the market-makers:  Please guys, keep your data clean and your noses will follow.  Better for everyone to have accurate data, and cheaper for you to run your business!  Big Grin

[Aug. 4, 2017] - Reviewed a bunch of stuff on the internet re. AI.  Lots of hype and nonsense, but some good stuff also.  (A lecture on pathfinding and dithering in games, using vector dot-products and such, and the Dijkstra algo and A* and such to do searching, was useful.  My world is not 2D, but the concepts generalize...).  Also watched a Sloan lecture from Stanford by Andrew Ng, one of the big names in AI - very good, interesting and useful, despite it being more for GSB types. URL:
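
For reference, the Dijkstra algorithm from that pathfinding lecture is only a few lines with a priority queue - a minimal sketch over an adjacency-list graph (the graph shape and names here are my own illustration):

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from start in a weighted graph given as
    {node: [(neighbour, weight), ...]}.  Plain Dijkstra with a heap;
    A* is the same loop plus a heuristic added to the priority."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry, skip
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd            # found a shorter route
                heapq.heappush(heap, (nd, nbr))
    return dist
```

On a toy graph `{"a": [("b", 1), ("c", 4)], "b": [("c", 2), ("d", 5)], "c": [("d", 1)], "d": []}`, the distances from "a" come out 0, 1, 3, 4 - the a→b→c→d route beats the direct edges.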

He had some useful insights on the data problem.  Data is the key - and with HPC (high performance computing), lots and lots of data can be given to NN-AI's now, and the HPC approach allows improvements to go beyond the previous thresholds where the supervised learning reached its limit.   I've already spec'ed out a bigger network, more layers, using lots more data.  

Andrew Ng's lecture was very good, but he did not talk about Fintech.  What is not mentioned in Fintech AI apps is just how much motivation there is for *deliberately* corrupting the data, to present a market picture that is false.  (Imagine the self-driving car having to deal not just with trucks and people jumping into its path - but with other cars that have been programmed to *try* to hit you!)  That is one of the factors a market-oriented AI has to deal with.  The trickery played by London-based traders re. the LIBOR rate-rigging, and the fiddling of the gold-price fixings, are examples of data-tweaking.  There is lots of data - but has it been fiddled to create an advantage for the data-generator?  Lots of the time the answer to that question is simply "Well, sure, yes, of course. Absolutely!".

This "facked data" issue (as in the Irish expression: "Oh, Fack!") is one of the biggest problems AI faces in the real world.  The entire so-called "financial crisis" of 2008-2009 resulted from fiddled appraisal values on residential homes in the USA, and faked income data on the part of the buyers, which let them qualify for massive mortgages they had no hope of ever paying down.  And then the MBS-makers on Wall Street further generated fake data on the risks their institutions were taking on, by faking up their risk-models to show *vastly* lower risk than was the case.  And this "bad model" problem meant that the fundamental economic data of the banks and other agencies (ie. Fannie Mae and Freddie Mac) were bogus, bad and false.

If your deep-learning AI is trained on this fake data, its results will not just be wrong, they may in fact, be quite pathological.  If the AI's results are relied upon, and enter into an automated action-process, the outcome may be *massively* destructive to economic value.  I'm reminded of an old adage: "To err is human.  But if you want your error to create a major disaster, use a computer." 

The Stock-Broker's "Conduct and Practices Handbook" says you should not collude with other traders to "high-close" your stock, even if you are the market-maker.  But "banging the close" (to uplift the close price to artificial levels) is an obvious strategy, which benefits so many players that it cannot really be prevented.  It is stat-arbitrage that limits this trick, not the market policemen.

Reverse-engineering the other guy's trading algo (by watching when and where he places his trades), and then having your algo trade against his trades, is a well-understood strategy, and has the added benefit of not being illegal.  Trading-algorithm design is probably a better approach to take, rather than trying to predict anything.  I'm probably better off just lashing-up some heuristic response based on the NN's output, rather than trying to say anything about what the world will look like 5 days forward.  This approach is different, and probably requires that there be reinforcement built into the process.  The network learns and gets smarter and generates profit, as new data is created.  The obvious problem with this, however, is that you get the "nickels in front of the steamroller" problem... Your algo runs fine, until it makes one error which compounds, and you find yourself on the wrong side of a run with a leverage-magnified position that is going wrong at an increasing rate.  (This is what did in the very bright boys of Long Term Capital Management, for example.  And many others, too numerous to mention...)  Shocked

Another thing Prof. Ng avoided speaking of, was the use of AI's by government agencies to carry out mass-monitoring of citizen activities to very effectively limit political communication and political activity.  No one seems to be talking much about AI tech in this task of mass data-collecting and content-monitoring.  The AI's watching the communications grid can extract meaning from text and detect patterns in the linkage metadata, and zero in on folks to shutdown for speaking about the need for political change, and the requirements of human freedom.  This just happened in China.  Suppose you simply communicate a statement such as: "The Chinese Communist Party is an anti-democratic, illegal political entity that has usurped control of the Chinese State by force.  It has no mandate from the people to govern China, anymore than a group of criminal gangsters has.  In fact, its actions are very similar to gangsters, and its cruel repression of non-violent dissidents and political activists who argue for freedom, is deeply wrong, and culpable." 

The act of simply communicating such a statement as the above could earn you a jail sentence in China.  And a good AI (assuming you are foolishly communicating in plain text) should be able to recognize the "political" nature of such a statement, and identify who was the sender and the receiver.  Apple was forced to remove VPN (virtual private network) products from its App Store in China, so the Chinese AIs that are already monitoring the internet in China can continue to function effectively.

Those who choose to keep silent on this particular threat posed by AI technology, are not being honest.  There is zero risk that "evil AI's" will take over the world, or attack humanity.  That is pure nonsense - Hollywood fiction.  But there is real risk that AI technology - particularly the illegal mass-wiretapping technology already in operation - will be used by humans to hurt, punish and subjugate other humans.  That is the real risk.  VPN's and encryption technology are vital to the future of human freedom, otherwise we all run the risk of ending up like 2010 Nobel Peace Prize winner Liu Xiaobo - rotting in a government prison for speaking our minds, until our deaths can be engineered.  Not a nice outcome. 

We should use the new AI technology to help people, and engineer positive outcomes that enhance human freedom and expand opportunities.  And we must all work to prevent AI technology from becoming yet another instrument of social repression, and an amplifier of human cruelty.

[August 3, 2017] - Researching non-neural-net approaches to AI... pathfinding, graph-theory, and more annoying game stuff.  (I don't like computer games.  Reality is much more fun.)

Anyway, we finally are having summer-like weather, and the hot days here are lovely.  I have about 25 ideas to improve the NN-AI's operation - but most involve sourcing more data, which is looking difficult.  May have to buy a data-feed, as way too much on the internet is either fake, wrong, broken or deliberately obfuscated.

Did a weird thing.  Bought a new Windows-10 laptop, and hacked about with it to make it look sane and be useable.  What an unbelievably difficult and annoying exercise!  The little machine came with "Windows-10 Home Edition", a deliberately crippled product (no GPedit.msc, for example), but via numerous registry hacks, and by turning off most of the Microsoft junk (does anyone anywhere actually want "Cortana"?), it is almost usable.  (The "right click" feature on the "Start" icon in the bottom-left corner (which gets a sane menu for accessing Windows features, instead of the hyper-stupid "tiles" crap), and the Win+R option to bring up a "run" box (type in "regedit.exe", for example), really help.)  Managed to get original Excel and MS-Word installed, but it took *days* just to get the thing to operate in a sane manner.  If I get time, I will document what I did.  (Eg: Used the "icacls" cmd to remove restrictions on the WindowsApps directory, so I could see exactly what bloatware and other gunk was pre-installed.)  I made extensive use of Google search.  (Everyone everywhere has had the same problems, and a lot of doc exists online about how to correct the damage Microsoft has done to their old Windows-NT product.)  Modern hardware is nice, but the commercial software is comically awful, and seems to be locked into a strange death-spiral of "kick the customer" madness... but perhaps that is just because I've seen better, and done many things (like the old Queen song by Freddie Mercury - "I had a million dinners brought to me on silver trays...."  In my case, maybe "I've had a million problems brought to me... Consulting pays!").  Windows-10 reminds me of using datacenter-restricted IBM 370's running MVS, from the ancient awful mainframe world I first encountered as a tiny child.  We seem to have come full circle.

Finally got the little laptop working nicely, and our spreadsheet expert shut it down, except it has now spent over half an hour saying "Preparing Windows - Do not turn off the Computer", which means we are getting "updated" (again - it just did *all* its updates yesterday!), and this despite me turning off the "Windows Update" service using services.msc.  (Note: Later, I rebooted, and confirmed "Windows Update" has been turned on again, so even under the Administrator account, Windows-10 does *not* honour the configuration setting in the "Services" menu.)  FF-sakes guys.  A computer that does not honour its admin-set configuration settings is a broken machine.  Or it is a *corrupted* machine.  Why does Microsoft *hate* its customers so much???  I can't figure out why the whole Windows experience has been made so amazingly toxic and awful.  Is there an internal faction working to destroy Microsoft from within the company???  Sure looks like it. Big Grin  At least we got our expenditure and FX spreadsheets working again.  (for now...)

[July 27-28, 2017] - I'm down a rabbit-hole on Bayesian inference.  The current Wikipedia page on this topic is not bad (until it changes...).  Also had lunch with a bright guy who suggested training the network to a specific probability, rather than just attempting to forecast a signed boolean.  And of course, there are a multitude of reasons why it (the NN-AI) is not working, with insufficient data being an obvious one.  I am seeking to do the undoable, I suspect, and have to consider that a) my early work at Treasury, where we looked at numerous forecasting techniques formally, never found any ability to forecast that improved upon the null-forecast (it will be in future what it is today); and b) this failure of the NN-AI to make accurate predictions implies a high probability that there simply is insufficient information in the input stream to generate the outputs I have trained the net to see - ie. the direction of a 4-day-ahead significant (ie. greater than 1%) price change in a specific target price series.
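
That null-forecast benchmark is easy to make operational: before a model forecast is worth anything, it has to beat "it will be in future what it is today" on held-out data.  A minimal sketch of the comparison (the function and variable names are mine, not from any formal test):

```python
def mae(errors):
    """Mean absolute error of a list of forecast errors."""
    return sum(abs(e) for e in errors) / len(errors)

def beats_null(series, model_forecasts, horizon=4):
    """Compare a model's h-step-ahead forecasts against the
    null-forecast.  model_forecasts[i] is the model's prediction of
    series[i + horizon], made at time i.  Returns (model_mae, null_mae);
    the model earns its keep only if model_mae < null_mae."""
    n = len(series) - horizon
    model_err = [model_forecasts[i] - series[i + horizon] for i in range(n)]
    null_err = [series[i] - series[i + horizon] for i in range(n)]
    return mae(model_err), mae(null_err)
```

On a toy rising series with a perfect one-step model, the model MAE is 0 against a null MAE of 1 - and the sobering Treasury finding was that real models rarely open any such gap.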

Knowing past price series and interest rates does not seem to let one improve upon chance when it comes to even getting a sense of future direction of price.   Why then can I do it by "gut feel"?   I've found if you watch and review everything, for years, and do so each day with care in how you observe, and limit your focus to a single, or a very few, target series, one seems to be able to train one's "necktop" neural net, to be able to make profitable bets on future outcomes - at least better than is possible by chance.   I wouldn't be here, writing this, and still able to pay my bills, if this were not true.   I also have a trivial simulator, that uses some really simple decision rules, that has long runs of losing trades, but manages to capture big moves, and continues to make money.  It annoys me that I see these results.   I believe we have entered a dangerous "new era" again, where advanced AI techniques are being used by the pros, and this is making for a very dangerous market where *everyone* will rush to the same side of the boat, at exactly the same time - provoking a monumental and rapid valuation re-adjustment (a warp-speed flash-crash perhaps).   Two things: just because I cannot derive an *edge* here, using the NN and boolean jumps, this does not mean that others have not.  (I believe others have, since I see the tripwires getting hit, and the very rapid reversals now all the time), and second: I've always been uncomfortable with the "frequentist" approach to probability (but I still use it), and Bayesian updating also just seems wrong. But it probably is not - we really have no choice, if we are to operate without certainty.   (I was reviewing the "Dutch Book" argument, or can you really create a "lock"?, and have just discovered the works of Ian Hacking (what a great name this man has! If you google for "bayesian inference Hacking"... well, you get my point). 

I've had failures and I've had successes - and in trying to do an objective analysis of what differentiates them, it seems the successes were based on an ability to sense the course of future events, and then to have taken action.  (The failures involve mostly seeing the future event-space also quite clearly, but *failing* to take action, and then just watching the "movie-of-what-will-happen" that I saw in my mind play out in reality, but without me having a position on.)  And sure, some of the time I am just stupid wrong - but then I reverse - though never quickly enough.  My worst errors come from listening to the comments or beliefs of others, or from betting too big when I was really young and stupid (as opposed to now, where I am apparently old and stupid... :).

Perhaps one has to move completely away from any attempt to "forecast" or "predict" the future, and just build a machine that is driven by the data-vectors of current events (those great 500 MB Hadoop dataset chunks that Spark uses?), and makes immediate decisions which are then rapidly updated using a Bayesian approach, with a new probability generated that drives the next revision, stepping you forward from beginning of day to end of day.  To me, the way prices move now appears to be pathological.  Trading and investing seem to have taken on a rather toxic nature - and I attribute this to the fact that something like 70 to 80% of all transactions now in the major markets are machine-driven trades.  If a naive human trader participates, he is almost certain to lose now, as the market will run to his "puke point" and take out his trade, no matter how he crafts his position.  This ugly behaviour of the equity markets is part of the reason that extremely low yields are accepted in the bond market.  See, something changed last year.  I can't put my finger on exactly what - but part of the post-Trump run was due to this, not related to Trump at all.
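
That decide-then-rapidly-update loop can be sketched with the simplest conjugate machinery: a Beta-Bernoulli update of P(up), where each tick revises the probability that drives the next decision.  This is a toy stand-in for whatever the real data-vectors would feed in - the prior and the tick sequence below are illustrative only:

```python
def beta_update(alpha, beta, up):
    """Conjugate Beta-Bernoulli update: one new observation either
    supports the 'up' hypothesis (alpha += 1) or counts against it
    (beta += 1).  The Beta posterior stays in closed form."""
    return (alpha + 1, beta) if up else (alpha, beta + 1)

# Walk forward through the day, revising the probability each tick.
alpha, beta = 1.0, 1.0                 # uniform Beta(1,1) prior on P(up)
for tick_was_up in [True, True, False, True]:
    alpha, beta = beta_update(alpha, beta, tick_was_up)
    prob_up = alpha / (alpha + beta)   # posterior mean drives next decision
```

After three up-ticks and one down-tick the posterior mean sits at 2/3 - no forecast of five days out, just a continuously revised belief about the immediate process.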

My sense is, that for the risk you take on holding equities, each of us should be getting *a LOT MORE* dividend yield than is currently the case.  And the market has a way of adjusting this parameter.  It just falls 20 or 30%, and then the dividend yields become attractive again.  What is interesting, is just how not-very-good dividends are at characterizing the attractiveness of equity investments.   They are almost viewed as a badge-of-shame, indicating a mature, not-growing-much business that is unattractive, and I think that is wrong.  A good business makes money, and a good investment has a nice fat yield.  These crazy times of high rates of technological change are not normal.   And much of the change now is not very good, and is not very nice.  When the printing press made books cheap, that was really great.  When the railways and steamships defeated time and space, that was just wonderful, and when the A/C electric grid was introduced, it made everyone's lives much better.  But now, the modern tech is just used to jerk people around, and take their money without really offering them much in return.   Modern technology just seems to be about creating the illusion of value, rather than actually offering anything that is better or more helpful.  Facebook destroys face-to-face human relationships, Amazon ends the existence of your local merchant's storefronts, and Google harvests all your data on everything you do, so it can end-run Madison Avenue, and assist the NSA in their illegal mass-wiretapping exercise to monitor and control civilian activity.   None of this technology is really very helpful to people, and none of these companies pay dividends - the attractive and economically useful way of returning investment profits back into the economy to be used to drive further business activity.  The smartphones are fun - but they further degrade face-to-face human interaction, and also seem to be creating an unhealthy culture of dependence.  
The best phone I ever had was a little Sony-Ericsson flip-phone.  When it broke, I replaced it with a very cool and slick Huawei running Android 6, but I really miss the little Sony phone. (FD: I also have a very nice Samsung Galaxy, which is small, and runs Kitkat, the last really good Android O/S.)  But the Sony phone was better and nicer to use, in so many ways, that it is really rather funny.  Yet the world is hooked now on these annoying flat-screen things that function rather badly as communication handsets, and yet are too small to be nice tablets, or proper computers, despite the power they have.   And I just read a few days ago about a company which offered to put security microchips surgically into people's hands, to make it easier for them to move about the offices and buy food in the cafeterias.  The employees apparently have enthusiastically agreed to this.  What can I say?  "Get off my lawn"?

[July 25, 2017] - Picture above shows Network Evaluation results for the May 18 to July 21 period.  The neural-network cannot predict with any useful accuracy - results are basically slightly worse than random.  The little Tcl/Tk evaluation program is provided in the "Code" section.  I believe the NN-AI approach is useful and effective, and what it has shown here is that there is not sufficient information in the data to forecast even the direction of change 4 days hence.  This actually confirms what I and others discovered in a project done for a Government Treasury operation, back in the 1980's.   Reviews and analytic efforts directed at current data-series are of no value in predicting near-future price levels in an active marketplace, and it is not even possible to catch turning-points or the direction of future changes.  It seems it is only by possessing specific, market-moving information, ahead of other market participants, that any "edge" can be obtained.  (Of course, if you can see the order flow come in, and act before these orders hit the market, that is essentially the same thing as acting with prior knowledge.)  What is interesting is that the "null forecast" (ie. "It will be tomorrow, what it has been today") always beats any active attempt to forecast.  I thought this might be different here, but for now, no joy.

Also, doing a crash course on Apache Spark (with side-detours into Scala and Hadoop).  Can't believe this stuff.  Worse than TensorFlow - which looks great, but is runnable only after downloading and installing terabytes of related Java, Python and other such material.  Looked at some OpenText stuff which uses Apache Spark.  The code-bloat here is just off the scale.  Dig deep into the stuff, and you get down to JDK, SQL and R like everything else.  This is the same gunk that hasn't changed in years.   I've been considering calling this a wrap, shutting down the website, and going back to just making money by some really traditional methods that have always worked for me.  AI and machine learning seem to have a dangerously high bullshyte component (to use Neal S.'s great word from Anathem).  I know AI can work, but it's all about recognition, and hammering away with machine-clusters on great "data-lakes" of unstructured material is not going to make anyone but the regulators and the software merchants any money.  OpenText seems to have the right idea, in that they use Spark to sift thru gobs of crap-data that companies create that can leak PII (personally identifiable information) out into the public space (think SIN's, credit-card #'s, etc.), and help stop this leakage so the company does not get f**ked over by new European data privacy laws.   But a lot of the other AI promises look to be nonsense.  The data has to be structured (and clean!) if it is to be of any use.  (That's why AI only really works in games, where reality can be tightly bounded, so that Taleb's "ludic fallacy" is no longer a fallacy.)  But what I have learned from this project is that I need a lot more data before I have any real chance of making accurate forecasts.   And I've also realized that I can code (into booleans) a whole lot more than just price-changes.  
If you believe in "efficient markets", then everything should already be in the price - and so price change should be enough to get a good handle on the future.  But all the research shows that markets are not even close to efficient - and it is in the nature of the inefficiencies that the money resides.  BMO just completed a 4 million share repurchase effort, with shares repurchased for cancellation.   Nice move.  Be a nice trade to step in front of, no?  But I only hear about it by reading the newswires after it has been completed.  "Information" is not homogeneous - most info is useless blather and flatulent noise - but some is, or can be, critically useful.   Get that data, code it up as boolean strings, feed it to the NN for training, and your AI might be able to become smart enough to make a difference to your results. 

I posted the tiny Tcl/Tk program into the "Code" section that is used to evaluate the boolean table generated by the network (it creates the evaluation table shown above).  It calculates a simple "coefficient of accuracy" by just counting evaluation cases where the network got the forecast correct.  It counts anything with an absolute value less than or equal to .8 as a zero (the network has to provide an output value below -.8 or above +.8 to have it counted as a minus one or a plus one; the three possible target values are -1, 0 or +1, so any result in the -.8 to +.8 range gets counted as a zero).  The coefficient of accuracy is running around 23 to 27 percent, so I conclude the network is just not able to forecast at all.  What is interesting is that if I forget to load the weights, and run the evaluation on a random network (where the network node weights are all just random values), the coefficient of accuracy typically jumps to around 65%, as most of the forecast target values are zero, and most of the randomly-produced output values fall within the -.8 to +.8 "evaluate as a zero" range.  I think this is absolutely hilarious.  My trained AI only gets it right 1/4 of the time - but the "null forecast" (ie. nothing really changes - or any change is less than 1%) is evident about 2/3rds of the time!  This result jibes exactly with previous research I did for a government department years ago.  We found the "null forecast" (ie. it will be in the future, what it is now) *always* beat any forecast provided by professional forecasters and economic soothsayers.  This is actually pretty interesting.    I have a suspicion that there might be something actionable here using Bayesian probabilities, if I could just improve the network forecast to getting it right 40 to 45% of the time - still less than half, which would seem to provide no edge at all.  
But if you know that 2/3rds of the time there will be no significant change, then when you do get an indication of an expected price jump, and the costs and payoffs of the bet are sufficiently asymmetric, it still might work over time to make a bit of money. 
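The counting rule, and the asymmetric-payoff point, can be restated in a few lines of Python (the real evaluator is the little Tcl/Tk program in the "Code" section; the outputs, targets and payoff numbers below are invented for illustration):

```python
# Hypothetical restatement of the evaluator's counting rule: network
# outputs in [-0.8, +0.8] score as 0; beyond that, as -1 or +1.

def quantize(y, threshold=0.8):
    """Map a raw network output onto the trinary target values -1, 0, +1."""
    if y > threshold:
        return 1
    if y < -threshold:
        return -1
    return 0

def coefficient_of_accuracy(outputs, targets):
    hits = sum(1 for y, t in zip(outputs, targets) if quantize(y) == t)
    return hits / len(targets)

outputs = [0.95, -0.99, 0.10, 0.70, -0.85]   # invented network outputs
targets = [1, -1, 0, 1, 0]                   # invented true jump values
print(coefficient_of_accuracy(outputs, targets))   # 3 of 5 correct -> 0.6

# The asymmetric-payoff point: even a 40%-accurate jump call can be
# positive expected-value if wins pay 3 units and losses cost 1
# (payoffs invented for illustration).
ev = 0.40 * 3.0 - 0.60 * 1.0
print(ev)   # 0.6 units expected per bet
```

Note how the 0.70 output scores as a zero under the threshold rule even though the true target was +1 - exactly the conservative behaviour that makes the random network "win" when most targets are zero.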

[July 24, 2017] - I updated the data, generated the boolean-impulse casefile, ran the neural-net model, and looked at the forecast vs. actuals for the last couple of weeks.  The network just does not predict well.  What does seem to work are serial-autocorrelation strategies.  The target trends quite strongly.

[July 20, 2017] - Interestingly, the inability of the trained network to make accurate predictions for the 5-day-ahead point in time is almost certainly due to the fact that there is not sufficient information in each training case to make such a prediction.  In other words, even the direction of change in the near future cannot be known with any accuracy.  This is useful information.  It tells us that if we are to have investment success, we must target time-frames where we can effectively use current information to advantage.  That may well mean time-frames of hours and minutes (we know that works), or months and years (Graham and Dodd show how that can work, too).  Throughout this exercise, I remained in a long position on the target security, which today trades at 109.47/shr.  Much of the percentage price improvement (which was not caught by the network) is the result of a recognition that a confluence of factors is at work - the improving position of the commodity-driven Cdn dollar (oil and gold cannot and will not stay cheap forever), the slow but inevitable rise in the general level of interest rates (the recent quarter-point increase in the Bank of Canada rate is certainly only the beginning of the process of rate normalization), and the various fundamental indications that the valuation of our target security (against baseline financial-ratio metrics, as well as its peers) remains attractive.  With an attractive payout ratio above 40%, a dividend rate that remains close to 5%, and a long historical record of dividend consistency, it is not difficult for an objective analyst to put a target price of $140 to $150 per share (Cdn$) on the target.  

I have a better understanding of why the "robot"-selected portfolios are so attractive now to investment professionals.  In the same way that neural-networks can always "see" the image if it is in fact see-able by humans, it is probably true that this technology - when applied to datasets that actually contain sufficient information to make an effective selection, and over a sufficient time-frame - will achieve accurate recognition, and make profitable choices. 

What this means in practice is that I need to lengthen the time-frame significantly, and broaden the scope of the data to include fundamental information on market tone and target financial characteristics.  The 5-day time range is basically all noise - if you train to noise, you cannot get anything meaningful as a prediction.  But if you look out several years, and train your network to select for characteristics that are known to have (and must have) significant effect on the ultimate target price, then you will almost certainly substantially enhance the network's effectiveness.  I am also pretty sure you can *shrink* the time-frame down to minutes, and basically have the network trade the order flow - and profit from essentially scalping the bid-and-offer range.  This is how the old floor traders made their livings - a few ticks on each trade, based on their reading of the marketplace.  Many reports suggest, for example, that just the volume-level of shouting in the room was a useful and actionable indicator.  One needs very high-speed tick-by-tick data, and the ability to execute rapidly, to even begin to test these sorts of network-driven strategies (the so-called "flash" trading models), and there is a lot of evidence that *many* groups are already doing this effectively.

From this work, I am now of the opinion that it is only by trading over multi-year time-frames that the average, non-professional investor can significantly profit in modern securities markets.   The very-short-term remains the domain of the very well funded professionals, who have access to substantial capital and advanced-technology linkages, while the multi-year time-frame provides the non-professional investor real opportunity for investment success - if investment selections are made wisely, and monitored carefully.  The intermediate ranges - weeks to months - seem to be characterized by what I term "reactive noise": occasionally statistical arbitrage is possible, but the noise component is high enough that catching the weeks-to-months intermediate market swings remains difficult.    What this means in practice is that if you are swing-trading and trying to catch local ups and downs, you are unlikely to make money over time, and in fact run a high risk of being knocked out of an attractive position at the worst possible time. 

Bottom line: The neural-network generated several sell-signals, but I elected to remain fully-invested (for reasons indicated above), as the target security advanced from the 104+ level to the 109+ level where it trades today.  On a 2300-share position, this $5/shr move has generated an $11,500 gain over the evaluation period covered by the experiment.  Should the valuation of the target security move closer to its peers (particularly its US-based peers), then substantial price improvement would seem to be possible.  Given that the company in question has made a significant US acquisition, it is not unreasonable that the market may, over time, assign a valuation to the target that aligns closer to its US peers.

[July 15, 2017] - The current experiment to use boolean delta-jumps as a predictive strategy has not yielded a particularly effective forecasting tool, but it does allow one to characterize the market, based on a particular picture-of-the-world that has prevailed, and as such, it provides a formal instrumentation of the current market situation.  The formalism and methodology are sound, and an enhanced dataset (more than just 6 data series) can be expected to yield better, more fine-grained results.  What I've done here is to develop a working proof-of-concept neural-network-based AI product, which can provide market characterization, based on choices that can be made by each client.  It's possible for a tailored, custom AI product to be quickly designed and implemented, specific to the views of a single client, which would incorporate a client-specific data selection.  As we know, the major investors in New York are already doing this, and I believe the opportunity now exists for smaller firms and individuals to deploy AI methods.  I suspect this may even things up for investors, and that a more level field will make a fairer and more effective market for everyone.

[July 12, 2017] - Formal evaluation of results:  Two networks were trained on 4361 cases, where each case was a 30-element signed boolean vector, derived by looking at price jumps of several different securities and commodities, training to a price jump in a target security 5 days hence.  The nets are V2 and V5.  On the training data (the 4361 observations), Net_V2 got 3893 out of 4361 cases correct (= 0.892685), and Net_V5 got 3849/4361, for a coefficient of accuracy of 0.882596.  Net_V2 seems to be the best network so far (coefficient of accuracy on training data: 89.3%, versus 88.3% for Net_V5).  On the evaluation cases, so far, the networks are not performing well.  Their results appear to be worse than what could be expected from randomness.  On the data from May 11 to July 11, Net_V2 is posting 9/34 accurate forecasts, and Net_V5 is posting 7/34 accurate forecasts (coefficients of accuracy: 0.2647 for Net_V2, versus 0.2059 for Net_V5).  I suspect the issue is that the boolean price-jump data being used to train the networks does not contain sufficient information to know what the target price jump will be in a week.  If a linkage could be established, I suspect the network training would have found such a relationship.  But what these results suggest is that knowing the price-jump history for several days back, and across several different price series, is not sufficient to predict a future price jump - even if that future is only 5 days hence.   It suggests we need more data, across a greater number of independent components, if we are to have a better-than-even chance of predicting future price jumps.  A nice methodology, but no "edge" for now.
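For the record, the quoted coefficients are just hit-count divided by case-count; a quick Python check of the arithmetic:

```python
# Coefficient of accuracy = correct forecasts / total cases.
print(round(3893 / 4361, 6))   # Net_V2 on training data   -> 0.892685
print(round(3849 / 4361, 6))   # Net_V5 on training data   -> 0.882596
print(round(9 / 34, 4))        # Net_V2 on evaluation data -> 0.2647
print(round(7 / 34, 4))        # Net_V5 on evaluation data -> 0.2059
```

The gap between roughly 89% on training data and roughly 26% on fresh data is the classic signature of a network that has memorized its training set rather than learned anything that generalizes.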

[July 8-9, 2017] - Rebuilt Xerion completely on a Linux laptop from source, and kept notes this time.  The original build was experimental, done back in mid-March on the AI box, and I did not keep notes.  You need the varargs.h file, and each of the "configure" files needs to be fixed (they report syntax errors which suggest the tcl.h and uts.h includes are not being found).  The fonts.dir file in the /usr/share/X11/fonts/100dpi directory has to be altered, so the "-adobe-courier-bold...70..." font can be found when the "bp-wish" program, in xerion/tkbp-4.1p2, is run (it brings up the Xerion gui screen).   You edit an existing reference of "-adobe-courier-bold .... 90..." to become "-adobe-courier ...70 ..." (literally change the "90" to a "70").  This does not seem to impair any existing apps, and allows Xerion to pop up its window, assuming you are running an X11 desktop of some kind.  Also, before trying to build the Xerion components, you need to build and install tcl7.3 and its associated tk3.6, the itcl extensions, and tclX, all four of which are provided in the UofToronto's Xerion source tarballs.  Successfully built tcl/tk, and Xerion and its components and associated utilities, on the Fedora Linux laptop.   (See the site top image, which shows Xerion running on a Linux laptop & training against the initial boolean dataset.)

[July 7, 2017] - Over the last two days, retrained a different network on the same 4361-observation dataset, where each day is a 30-element jump-delta vector from various market prices.  Interestingly, despite having the sumsquare error driven down to roughly the same reported level (318 for the V3 network, versus 311 for the V2 network, from a starting error level of typically around 3000), Network V2 - the production version under active evaluation - appears to do a better job.  The V3 network provides remarkably different results for the May 11 to July 6th test range versus the V2 net.  The original V2 net seems to be much better, in that it seems to be more accurate in its forecasts.  (The V1 first network was not good - so the original is V2, and the newer trained version is V3.)  A montage of both results from "compareMarketNet" is shown in the last screen display of the "Code" section.

[July 6, 2017] - (Afternoon) Updated the "Code" section, to provide the tcl code that creates the example network I have been using and loads the training cases, and to show how the iPad can be used to take the Xerion network weights and structure, and run boolean jump-delta vectors made from market price data, right on the iPad, to get go/no-go trading decision information.  This now provides a working prototype of a neural-network-driven portable AI that can be built and trained using a large amount of real market data, but run on an iPad in real time, to provide immediate, actionable market decision suggestions. 

[July 6, 2017] - Experimenting with different back-prop methods (conjugate gradient, delta-bar-delta, momentum descent...) and different step-methods (fixed step with various epsilons, line search, slop search) to see which gives the smallest error.  I'm interested in interpreting the net's output as a gaussian that I can use for result-evaluation, and am still not sure of the best way to do this.  The day-to-day results appear to be good enough to trade with, and it looks like this approach is offering a small, but viable, edge.  This is key.    Oh, also a big tech result:  I downloaded newer versions of gtk, gdk, and glib, and compiled and built everything from source, on three older Linux platforms (two laptops and the AI box - need a modern Firefox to find data...).  Then, I downloaded the Firefox that is current for CentOS 6.6, which is Firefox-34.  Once you have Ffox v34 running, you can access modern JSON websites and such, and also jump to version 44, using the Firefox upgrader.  Here is an interesting caveat.  Despite running a "./configure; make; make install" on the glib and gtk+2 sources, the rpm (package manager) was still reporting the old versions, and my binary-only Firefox-34 would not load (the error was: gtk_widget_set_can_focus symbol not found).  The solution was simple, and I stumbled upon it myself because I am stubborn, and had confirmed that the gtk_widget stuff was being compiled.  Just nav to the dir where you ran the gtk compile, and run "ldconfig" to let Firefox find the libraries at runtime.     Modern Linux has dynamic libs (they load at run time), so even using yum to update "xulrunner" and "nspr" did not get the Firefox binaries running.
Oh, and this is critical: Compile your new glib first (I used an older glib, ver. 2.26.1), and after the "make install" step, run "export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH" and then run your "/sbin/ldconfig" to configure the dynamic linker runtime bindings for the graphics stuff; otherwise the ./configure step for the gtk+2 stuff will not run.  Can't remember where I found that, maybe on a StackOverflow post?  The default glib install puts the glib libs in /usr/local/lib (which is probably what you want to do, so as not to degrade your production gnome desktops).  Yes, this is a tad kludgy, but my stuff all just works.

[June 30, 2017] - Results for the network run pre-market are another +1, so we now have three positive boolean ones in a row.  Market tone is weak and soft.  Target price was up, then down, in choppy action.  The network says to buy now - a clear, strong signal: three +1's in a row.   See the first image in "Economics-2017" to see the output screen.  I also show the full Hinton Diagram as generated by the Xerion display utility, for the case of June 29, 2017, the most recent observation.  The single white square on top in "Unit: Output" is the +1 boolean target value.  Minus one is a pure black square, and a near-grey square is a zero.  Note the inputs and network output are signed booleans ("trinary data"), but the internal network values can vary between -1 and +1, as shown by the Hinton Diagrams in the middle row of boxes in "Unit: Output".

[June 28, 2017] - Sourced the data, built the boolean table, ran the network.  It toggled... negative to positive output, from -1.0 to .9996895.  So, that means price upshifts?  The price delta of the target is already +0.82 of a dollar by 9:39AM as I write this.  Since I am only using retail-level trading software, I have zero chance of even putting on a position.  Price will probably retrace.  But the methodology seems curiously solid, and with proper software, there might be opportunity here.   [1:30pm update]  Tweaked the AI box, and got everything working there, including the Probability Calculator and other modules, which can be driven from .PTB-format data written by TSM.  Linkages between packages are file-based, but *everything* can now run on Linux.  Currently using an older Fedora kernel, but CentOS 6.6 and 7.x look like they work ok with everything.  Wine compiled and installed fine on all machines, and TSM and MAKECASE run fine on my CentOS 6.6 testbed.  Updated the top picture, showing the AI box running Xerion with Hinton Diagrams of unit values for the June 27, 2017 datacase, TSM and a data-driven OHLC price-chart of the target, the cmd-line Xerion "bp->" running the GNUplot display of NN actual vs. predicted, and the Probability Calculator (running in a DOSbox), which also provides a risk-driven recommendation for position size.  The same module which runs the Prob. Calc. also calculates Hurst exponents, and a series of moving-average market characterizations and related graphic displays.  Having it all running on one platform makes data-management much easier and allows results to be obtained faster.

[June 26, 2017] - Determined I had a bug in the last-date processing of raw price numbers into boolean data, and built a fix.  This gave me a proper result for Friday - one more data record on the "tcasetab.txt" boolean exampleSet file.  The correct network output for the most recent data (Friday, June 23rd) was -0.999966 (Thursday was -0.533085), and I expected this only to flag the shift in price resulting from the target going ex-dividend, but it seems the network did better, and caught a serious >1% downtick in market price, from the 106.11 level to the 104.90 range, by noon on Monday, June 26th.  This could all just be randomness, of course, I realize - but this technical approach is showing surprising, unambiguous ability to forecast future market price direction. 

[June 25, 2017] - Latest results, with data to June 23, 2017.   The network under test is a trivially small neural-network model, but the results are interesting.  See the "Economics-2017" section.  Target security goes ex-dividend tomorrow by 1.27, so we know open price will be 106.11, ceteris paribus, and will be down enough to trip a boolean in the jump-delta MAKECASE table.
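As a sanity check on that ex-dividend arithmetic, and the boolean it should trip (the 1% jump threshold here is my working assumption, consistent with the ">1% downtick" notes elsewhere, not a quote from MAKECASE):

```python
# Hypothetical check: does the 1.27 ex-dividend drop trip a jump boolean?
# The 1% threshold is assumed for illustration.

prev_close = 106.11 + 1.27            # close before ex-div = 107.38
pct_change = (106.11 - prev_close) / prev_close

def jump_boolean(pct, threshold=0.01):
    """Signed boolean: +1 for an up-jump, -1 for a down-jump, 0 otherwise."""
    if pct > threshold:
        return 1
    if pct < -threshold:
        return -1
    return 0

print(round(pct_change * 100, 2))     # -1.18 (percent) - more than a 1% drop
print(jump_boolean(pct_change))       # -1: the boolean trips
```

So, ceteris paribus, the ex-dividend open alone is enough to put a -1 into the jump-delta table, which is why the network output on ex-div day has to be read with care.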

[June 22, 2017] - The NN-AI (Neural-Network based Artificial Intelligence) device described here, looks as though it might be useful.  This page has become too long, so I put some of today's preliminary results in the "Economics-2017" section.

[June 19, 2017] - Put the MAKECASEBOOL program inside the Linux version of my Time Series Manager (with its nice Windows GUI), and built a bunch of little modal window routines to make it work as a full graphic user interface (GUI).  This makes it easy and quick to generate a boolean jump-delta table and hand it over to Xerion, to have the network run against it.  Also built (yet again!) an entirely new data-sourcing subsystem, to merge/load yet another completely different data source into the TSM database, so that my time-series data can stay current.   Information suppliers seem to change their formats every few months now.   The Lynx browser is used to pull data from various internet sources, while .csv-format files can be downloaded from sources such as the St. Louis Federal Reserve.   Putting everything into a stable data-management product (my custom-built TSM, in my case), and ensuring that the data is accurate, is the first, and most critical, step in any data-driven research and analysis exercise.  Many research efforts and analytic toolsets use SQL variants to maintain time-indexed data, but SQL does not lend itself to maintaining and manipulating time-series data very well.  My TSM thing lets time-indexed tables and vectors be manipulated as single entities, and so lends itself to facilitating the training-dataset construction that NN-based machine learning requires as a starting point.
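For anyone curious what such a table-builder does, here is a hypothetical Python sketch of the idea (this is not the actual MAKECASEBOOL code - the series names, prices, and the 1% jump threshold are all invented):

```python
# Hypothetical sketch of a MAKECASE-style boolean jump-delta table-builder:
# several aligned price series become rows of signed booleans (-1, 0, +1),
# one row per day, ready to hand to the network as training cases.

def jump(prev, cur, threshold=0.01):
    """Signed boolean for a one-day move: +1, -1, or 0."""
    pct = (cur - prev) / prev
    return 1 if pct > threshold else (-1 if pct < -threshold else 0)

def make_case_table(series):
    """series: dict of name -> price list (all lists the same length)."""
    names = sorted(series)
    days = len(next(iter(series.values())))
    return [[jump(series[n][t - 1], series[n][t]) for n in names]
            for t in range(1, days)]

prices = {                      # invented example data
    "cad_usd": [0.75, 0.75, 0.76, 0.76],
    "gold":    [1250.0, 1251.0, 1240.0, 1260.0],
    "target":  [104.0, 106.0, 105.9, 104.5],
}
for row in make_case_table(prices):
    print(row)   # columns in sorted-name order: cad_usd, gold, target
```

In the real pipeline, each row would also be lagged several days back and paired with the 5-day-ahead target jump to form a complete training case.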

[June 17, 2017] - Got the Lynx Browser running - WITH OpenSSL (needed for "https:" pages, of course) - on the Linux boxes *AND* natively, on the iPad (?! yes, really.  Ran the configure and the make right on the iPad, and built a working, SSL-enabled Web-browser right on the tablet itself, no awful glop-bucket of Eclipse or Apple dev-goo with timebombs in it...)  Lynx works pretty well, and will be useful (see the top line "GNUgcc & Lynx Browser" for details).  Also, re-wrote all the data-get routines in Time Series Manager, so I can get data again, and slotted in the stub for the boolean delta-jump table-builder, called MAKECASE.  Just need to build into TSM some nice modal boxes to pick up the boolean table-build parms, and I have a cobbled-together, hacketty-hacked (but real, and actionable!) prototype AI product. 

[June 15, 2017] - Massive powerfail at the lab.  A large tree fell on our powerline, snapped a power pole as the wires broke, and we went dark.  Within an hour, I had our electrics guy here with a new pole, and a utility crew removing and replacing the blown transformer by the garage.  (This actually happened on Monday, the 11th.)   All recovered within one day, but it got me thinking about disasters.  Here is another:  Reading about Intel ME, and the "Silent Bob is Silent" exploit.  Bad stuff, already in the wild and being used, it appears.  Two of my boxes have Intel ME, and today I shut down, and then re-booted, a *powered-off* machine from the hacked iPad, via only WiFi access, just using the Safari browser on the iPad logging in to port 16992 on a Win7 box.  The Intel AMT software is firmware, running on an ARC chip on the motherboard, and runs completely separate from whatever software you have put as your O/S on the box.  Folks have disassembled Intel AMT, and the "Silent Bob" exploit lets you log in and access the Intel-ME webserver even if your machine is plugged in but powered off - and without entering the admin password.    The IntelME thing can be used as a packet-sniffer, and to access memory on the Intel box while it is running.  It's basically the "Clipper Chip", an ugly idea shot down over 20 years ago.   Read the tech details about it at this URL:   There are experimental "ME-Disable" routines, which flash part of the firmware with hexFF, but try to keep the BIOS BUP (boot-up) stuff; they can brick the Intel main-board, in some cases.  It's an ancient paradigm: we must take risks, to be safe... a lot like investment activity.   The training target swung from 104.70 to close at 106 even, as the Cdn$ showed some firmness.  With ex-div approaching, and summer vacations being taken, things will likely get a bit twitchy.

[June 11-14, 2017] - Lots of volatility again.  But also, got some cool techie stuff done.  Installed the latest stable version of "Wine" for Linux on the Linux boxes.  The "Wine" ("Wine Is Not an Emulator") program suite lets Windows programs run on the Linux machines.  A big result, as MAKECASE is written for Windows, as is the TimeSeries price database.  Both are now converted to run on Linux - along with WGNUplot, so my whole data-management app can now run on Linux.  Also downloaded the "openssl-devel" stuff, and rebuilt the Lynx programs on Fedora Linux and CentOS to use ssl (secure socket layer), so Lynx can run with "https:" access.  This was critical, as most financial datasites are ssl (ie. "https:") now, and Lynx is the text-mode browser that is used to pull in the data.  Seeing all my old code and graphics from Windows run on the Linux boxes is quite surreal, as it all works well.  I used the stable-release "Wine" source from:     Note, the SHA256 and MD5 checksum hashes for the wine-2.0.1.tar.xz file are in the photo near the page bottom.  If you have Windows code you want to run on Linux, this looks snappier than building a virtual Windows box.  Image of my Time Series Manager (TSM) application with a 12,362-row by 6-col. series (spot gold prices, 1968 to June 9, 2017), with a linear chart and least-squares regression line, via GNUplot, at bottom.  The TSM product avoids spreadsheet stuff, and lets data series be manipulated easily and directly as single tensors.  The last image shows Windows .EXE's running directly on a CentOS 6.6 Linux kernel, using Wine 2.0.1

[June 9, 2017 - Friday] - The training target is up over 2% today, providing initial positive results to the real-time experiment this development project has become.  (Training target, and other financial equities, are basically in a "run" mode.)  Attempts to add to my existing position would now require bids roughly 3% above where the initial +1 sequence began appearing in the current jump-delta example set (the May 11th to June 7th "TCASETAB.TXT" file of booleans, which the trained NN-AI has been interpreting).  It is quite possible that this favourable outcome is due to random chance.   (Training target is now up over 2.20%, just as I have typed this note.)   I suspect that the semi-strange market behaviour we now typically observe (ie. curiously uneven patterns of volatility - no volatility for long stretches, followed by rapid spikes and retracements) is due to the widespread use of AI and other algorithmic methods to augment trading and investment activity.  We may still be being fooled by randomness, but we are also much less randomly fooled, it would appear.  I have a strange sense that this modern market may exceed the excesses we observed during the 1920 to 1935 period.  If this is true, then DJIA in the 35,000 to 40,000 range within 3 to 5 years is not at all unreasonable or unlikely.  Rising rates will be associated with rising returns on capital, as is often observed in the historical record.  And the AI tools - as they augment ability - will also likely enhance the risk-preference profile of most participants.  The equity market may be the mechanism that puts more income into the wallets of consumers, so that consumption and investment can be given the demand-push that many folks think it needs to have.  What is curious, of course, is the low rate of indicated inflation.  But I think I know the reason for this also, and discussion of that phenomenon is well beyond the scope of this observational comment...

[June 8, 2017] - Re-ran the NN-AI (Neural-Network AI) program with two more days of data.  Enhanced the MAKECASE and MAKECASEBOOL utilities to allow the TCASETAB.TXT file of boolean jump-delta vectors to be more easily generated.  Mkt action suggests *many* other participants are already actively using AI methods.  What this suggests is that this methodology probably needs to be in everyone's toolbox.  Although a bit technically complex to pull together, if there is some predictive ability, it may be useful.  Certainly, the NN-AI approach is probably the best tool for trying to catch a turning point.  I recall a formal exercise, carried out by a Ministry of Treasury - in which I did the computer programming - that failed to find *any* method that could successfully even indicate upcoming interest rate moves (and subsequent changes in bond prices).  We literally tried *all* known methods, and they all were ineffective at even catching major turns before they happened, much less actually predicting anything.    But this was before NN-AI based methods.  If the data is prepared properly, it appears there might be some effectiveness to the NN-AI approach described here.  (FD:  My order was not filled yesterday.  Today, the target has advanced 0.60%, as I write this.  This approach looks promising.)   The first image is the screen-display for today's forecast: (Orange screen, top right, with boolean results for the last 4 days, all +1.  As a real-time experiment, this suggests a long position in the target is indicated.)

[June 7, 2017] - Fixed a bug in the MAKECASE program, which was not handling the end-of-data construction correctly, simply branching away when it could not create the target (which is 5 days forward).  Fixed the program to provide valid training case data, with -999 as the indicator that the training target could not be constructed.  This lets me run the MAKECASE program for a small subset of data (ie. the last 20 or 30 days), and produce correct boolean jump-delta case vectors right up to the end of data, despite not having a training target.  Obviously, this is needed in order to run "compareMarketNet" and see what values the network generates, as these most recent values have the most useful, predictive power.  Network still says an uptrend is predicted, as do the previous curve-fitting programs.  (Full disclosure: I put a small bid in just off the mkt, for a small increment to the existing position..) [Update: 9:40 pm EDT: I did not get filled, which is traditionally a positive sign. Almost always, if I get filled on a stat-arb stink bid, I regret it.  Today, not being filled suggests the NN-model might be working.]
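A rough Python sketch of the idea - not the actual MAKECASE code (which is written for the TimeSeries database), just an illustration of the jump-delta booleans, the 5-day lookback/horizon, the 1%-style filter parameter, and the -999 sentinel for cases whose future target does not exist yet:

```python
SENTINEL = -999  # target unavailable: fewer than `horizon` days of future data

def make_cases(prices, lookback=5, horizon=5, threshold=0.01):
    """Build (inputs, target) pairs from a single price series.

    Inputs are the signed booleans (-1 / 0 / +1) of the last `lookback`
    day-over-day moves; the target is the signed boolean of the move
    `horizon` days ahead, or SENTINEL when that future price is not
    in the data yet (the end-of-data cases).
    """
    def sign(delta, base):
        pct = delta / base
        if pct > threshold:
            return 1
        if pct < -threshold:
            return -1
        return 0

    cases = []
    for t in range(lookback, len(prices)):
        inputs = [sign(prices[i] - prices[i - 1], prices[i - 1])
                  for i in range(t - lookback + 1, t + 1)]
        if t + horizon < len(prices):
            target = sign(prices[t + horizon] - prices[t], prices[t])
        else:
            target = SENTINEL  # still emit the case, for live evaluation
        cases.append((inputs, target))
    return cases
```

The point of the fix above is the `else` branch: the most recent cases carry no trainable target, but they are exactly the ones you feed the trained net to get a forecast.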

[June 6, 2017] - Top first image shows most recent results:  I spent time updating data to June 5 (previous day), and ran MAKECASE from mid-May to present, to generate the current dataset to give to the network.  Specifically, here is the process to have the neural network evaluate data: (start bp_wish, ie. Xerion)

How to Restart & Reload Weights & run a Xerion Network Against New Data...

BashShell > bp_wish          (start bp_wish [Xerion] from command line shell)
> source     (this program just sets up the network, and
> set precision 17              defines the neural network structure, and loads
                                       the "tcasetab.txt" training case data into the
                                       variable MNTraining, and sets "exampleSet" variable
                                       to the string: "MNTraining" ...)
> set tcl_precision 17        (tcl_precision has to be set to avoid losing info )
> source compareMarketNet.tcl    (check results: "Actual vs Predicted" ability...)
> source plotValuesSML.tcl           (source the smaller "plotValues" tcl program)
> MNTraining size                         (check that exampleSet training loaded ok, 12 obs.)
> bp_groupType MarketNet.Output     (confirm nodes are correct configuration...)

> uts_presentExample MarketNet MNTraining 0         (present first example case)
> uts_activateNet MarketNet                                 (activate (ie. "run" the net))
> compareMarketNet                       (attempt to compare actual vs predicted. No good..)
                                                    (forgot to load the network Weights ! )
[ the results are random ]

> uts_loadWeights MarketNet MNnet40_v2.wtb    (load the highest-precision weights)
                                                                    (from binary format file...)
> compareMarketNet                       (this time, when we run this, we get sane results...)

[ the results as shown on the screens in first image ]

> plotValues MarketNet MNTraining    (creates Actual vs Predicted chart (see screen))
                                                     (Note that this "plotValues" is from the .tcl )
                                                     (program "plotValuesSML.tcl", sourced above... )
Hit Return to quit...

The results suggest uptrend in target value.


[June 4, 2017] - Light, clarity, perspective and focus - what we seek to have when dealing with complex situations where knowledge is obfuscated and obscured.  To see clearly the full panorama is often a luxury we do not always have.  Should we try to develop one process, which is slightly faulty, but which can operate successfully in most situations, or is it better to devise a more complex mechanism, which can adapt rapidly to a variety of situations, but is more likely to be fooled by crafted countermeasures?  I spent the weekend at the Lake, mulling over these design questions...

[May 30, 2017] - Developed sAPL functions "Estab^marketnet" and "Actnet2", and after some headbanging, got the numbers right(!)..  Really quite a result. It is doable.  The AI-Augmenter is doable.  You can build a simple (but sufficiently complex to solve a real-world task) neural-network on a Linux desktop box, using Xerion, and then take the weights file and establish the same network structure in sAPL, and then activate the network, and get the same results as Xerion gives.  The function "Estab^marketnet" reads the weights file, establishes the network structure, and the fn "Actnet2" runs the network (against training cases in var Example), and (as long as I remember to switch the default node transfer-activation function from the logistic equation to the hyperbolic tangent!), I can get the same numbers, running the net on the iPad, under sAPL - all in a less-than-400K workspace.  It's primitive - but it works.   It also offers the possibility to design and develop a toolset that is unique to each researcher.  No information need be stored or maintained on any internet server.  If you use this AI methodscape, you can retain and ensure local operational integrity, regardless of what happens in the "cloud".  (Ever seen a thunderstorm up real close..?  That is what the future holds for us all, I suspect... You don't want to be dependent on an internet connection for your machine intelligence.  A little wee package can still have a useful little brain.  Just watch a mosquito.)
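For anyone wanting to replicate the "Actnet2" idea outside of APL, the forward pass for a one-hidden-layer net with tanh units everywhere is only a few lines.  This is an illustrative sketch, not the sAPL or Xerion code - the weight layout here (one row per hidden node, plus biases) is my own choice, and the real numbers would come from Xerion's .wtt/.wtb weight dumps.  Note the tanh transfer function, not the logistic - the switch that matters, as described above:

```python
import math

def actnet(x, w_hid, b_hid, w_out, b_out):
    """Forward-pass a 1-hidden-layer network, tanh units throughout.

    x      : input vector
    w_hid  : list of weight rows, one row per hidden node
    b_hid  : hidden-node biases
    w_out  : output weights, one per hidden node
    b_out  : output-node bias
    Returns a single value in (-1, +1), matching the signed-boolean
    training target described elsewhere in these notes.
    """
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w_hid, b_hid)]
    return math.tanh(sum(wo * h for wo, h in zip(w_out, hidden)) + b_out)
```

With the same weights loaded, this should reproduce what Xerion's "uts_activateNet" computes, up to floating-point precision - which is why tcl_precision 17 matters when dumping the weights as text.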

[May 29, 2017] - Developed sAPL functions "readfile" and "procwt" to read Xerion's .WTT file (the network weights), into sAPL.  Also wrote "tanh" to provide a hyperbolic-tangent transfer function, so I can activate (ie. "run") the trained network, on the iPad. Put the APL code in the "Code" section, for those who might be interested.

[May 26, 2017] - Included more iPad examples of what visualization graphics might look like for the AI-Augmenter, as well as a bit of background info on the attributes a network's training target should have.  Note: the full source code for Xerion is available at:  and the documentation is at:   You want to use the Xerion 4.1 version.  The Xor2 network is trivial, but it is a better "Hello World" exercise than the OCR digit recognition stuff TensorFlow suggests.  Note that the first url (the ftp.cs.toronto site) has all the Tcl/Tk stuff, plus the Tcl extras, that you need also.  I may try and pull all my modified code together, and put it on the Github account.  Xerion runs under X-Windows, and seems to work fine under Fedora's Gnome desktop.  This is older code, but it is not burdened with a complex sack of dependencies (beyond the usual Linux stuff, of course).

[May 25, 2017] - site cleanup - re-orged topline stuff, put economics images into Econ-2017, last year's market forecast ("Sept 2016 - Why the Stock Market May Move Higher...")  into Econ-2016, and the "APL on iPad" details in its own section.  If the new signed-boolean stuff has forward accuracy, I can create a preliminary version of "AI-Helper/Augmenter" on the iPad, using sAPL.  

[May 24, 2017 (pm)] - Re-ran with "quickProp" method, developed by S. Fahlman (see notes on picture).  Runs better (smaller error), and faster (less than 19,000 evaluations), and fits better.  (Actually, the fit is surprisingly good).   You can see I saved the network weights as both binary and text values ("uts_dumpWeights MarketNet MNnet40_v2.wtb" and "uts_saveWeights MarketNet MNnet_v2.wtt").    The website is bloated now, and I have to re-organize this page (I am getting red-flag warning messages telling me the site will load too slowly now...)   Apologies if it loads like a slug.   But the "quickProp" result on the signed boolean data, using a line-search instead of the typical fixed-step (epsilon of 1), shows surprisingly good correspondence between training data, and network forecast.  I wanted to get this posted, so people can see what is possible.  For me, this is basically a "Hello World!" exercise.  It is a simple network (30 input, 40 hidden, one node output), but even a simple structure like this, can yield effective, actionable information..  

[May 24, 2017] - Happy Birthday, Queen Victoria!  Re-designed the network, now running with twice the hidden nodes.  You can see the Xerion code I use to create and define the network in the left-side window, cyan-coloured display, new pic..  Switched to using the Xerion GUI version, which gives 1-button operation for some operations.  Running training now.  Total network error falls quickly, and the ability of the net to match the input target looks better.  The 32-bit Intel box that Fedora+Gnome is running on works fine for this.   The tcl/tk interpreter is calling C programs (for the Minimizer), for the conjugate gradient evaluation and for the line-search.  It ticks along reasonably snappy.    Equity markets in Canada are choppy - analysts were not impressed by BMO earnings this AM.  (How much more money do the Cdn banks have to make, to impress people?  They are each earning at least $1 billion / *qtr*, and BMO just raised its dividend to 0.90/shr.  Fat profits and almost a 50% payout rate.  This is not good enough for you guys?)  (BMO fell $3.00/shr in the AM.)   Crazy times.  "Money for Nothin', and Your Sticks for Free!", like Mark Knopfler and Snoopy used to say...

[May 21 afternoon, 2017] - Results...  Looks good.  This is a bit of a black art, it appears.  Using conjugate gradient, the training is faster.  But you want to use a line-search, rather than just moving a fixed epsilon in the steepest direction, because directions can change a lot.  But eventually, the line-search fails, and one can go no further.  But then, you can switch the minimizer to using direction "steepest", and a very small fixed step, epsilon (0.001 or 0.0001), and just creep along the surface, like a blind man in the dark.   Not sure if this will really improve training, but I am still running with the overlapped data, each case only one day ahead, but with a 5-day lookback for each series.  A good NN should be able to train to pure noise, if you let it run long enough, so the early line-search failures after only a few thousand iterations led me to suspect inconsistent data.  But perhaps the network can deal with the rolling overlaps.   The length of the gradient vector, |g|, is just hovering above 1, and training is continuing on the AI box, an old 32-bit Pentium running Fedora.  The screenshot above, showing the last 360 "Actual vs. Predicted" cases for my boolean jumpdelta dataset, was generated just by imaging the Gnome Xwindows display screen with a little Samsung Tab-3 (SM-T310) running Android 4.4.2 (the old Dalvik VM).  Android 6.01 on a Tab-A runs better, and battery-life is vastly better, but the old Tab-3 running 4.4.2 is a fine piece also.
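A toy sketch of that minimizer recipe in Python - steepest descent with a backtracking line search, falling back to a tiny fixed epsilon when the search stalls.  Nothing here is Xerion's actual minimizer code (which uses conjugate gradient directions and "Ray's Line Search"); it just illustrates the "line search first, then creep along the surface" idea:

```python
def minimize(f, grad, x, eps_small=1e-3, iters=200):
    """Steepest descent with backtracking line search; when the line
    search fails, take one small fixed step (the 'creep' fallback)."""
    for _ in range(iters):
        g = grad(x)
        d = [-gi for gi in g]          # steepest-descent direction
        step, fx = 1.0, f(x)
        while step > 1e-10:            # backtracking line search along d
            trial = [xi + step * di for xi, di in zip(x, d)]
            if f(trial) < fx:
                x = trial
                break
            step *= 0.5
        else:
            # line search failed: small fixed epsilon step instead
            x = [xi + eps_small * di for xi, di in zip(x, d)]
    return x
```

On a simple bowl like (x-3)^2 this homes in on the minimum quickly; on a real network error surface the fallback branch is what lets training continue after the line search gives up.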

[May 20-21, 2017] - Update:  Got plotValues.tcl working.  Built trained network.  Shows Actual vs "Predicted" booleans.  (see picture above).   Was not setting "tcl_precision" to its max value of 17. (Default was 6).  Better training results now..  So, I have a trainable dataset now.  My Oxford Dictionary defines "naive" as "artless, amusingly simple".  Probably right.  In my naivety, I had thought I could use raw price data as input (despite scaling the Dmark data, years back, in my first trials with this technology...).   Wrong.  Your input data has to be between zero and one (if using logistic activation functions), or (I hope), between -1 and 1, if using hyperbolic tangent activation functions.   My attempts to train on raw price data, using an exponential transfer function on the final output node, failed.  Just doesn't work. The whole dataset would train to one value across all cases.  So, I had an idea.  I modified the MAKECASE function to create signed boolean vectors where -1 is down significantly, 0 is no significant change, and +1 is up significantly.  It runs with a filter-parameter that defines what "significantly" is - eg, 1%, 2%, etc.  Xerion lets me define the transfer (ie. activation) function as TANH, instead of the default LOGISTIC.   Tried this for both Hidden and Output groups.   The network outputs a result between -1 and +1 now, for each case.  Used "uts_groupType <netname>.Output {OUTPUT TANH SUMSQUARE}" to config the final output node, and built a training case set as signed booleans.  (Xerion also allows "CROSSENTROPY" instead of SUMSQUARE, and also lets me create a cost.Model.)  The network now trains to a single signed boolean (trinary output).   Converted MAKECASE into MAKECASEBOOL, and wrote the TRANSFORMBOOL fn to convert raw prices into a table of signed booleans.   This dataset can be trained, and looks promising.  But what I discovered, after only a few thousand iterations (line search, steepest, very small fixed epsilon), is that I cannot train this data very well.  I cannot even get the sign consistently right, before the "function is wobbly" message appears.  Now this is interesting, as it indicates the data is perhaps inconsistent.  (Using scaled price data, you can train right down to the noise, if you run your back-prop long enough.)   So, I thought about it and realized I am rolling ahead 1 day, and then taking the previous 5 days of historical data, to create each training case for the Xerion "exampleSet".  In Edgar Peters' Chaos books, a similar problem was encountered with re-scaled range analysis (Hurst exponents).
You don't want overlap in the data, as it blurs the trials, and the overlap messes up the statistical property of independent, exclusive trials that I am pretty sure one needs.   If I am looking back x days in each series, I probably need to roll forward x days for each training sample.  I will try this idea.  I've tried several different network structures.  Just checked the AI box.   Training this time looks better.  I get long runs of several months, where the signs are at least right.  Much further work needed - but I suspect now that this approach has merit.  Typically, markets are *almost* random - but often exhibit periods of non-random behaviour for various time periods, when serious money can be made, just by taking a position, and then doing nothing. Jesse Livermore (fictionalized as Larry Livingston) was very clear in "Reminiscences of a Stock Operator" that he made the most money by "just sitting".  This seems to have worked for Warren Buffett as well.   I had a CP/M Z-80, when Bill Gates was starting Microsoft, and my first serious app for my new IBM PC was written in MASM assembler.  But for some reason, I never bought Microsoft stock (too expensive?), despite telling folks that Mr. Gates would probably sell MS-DOS to every literate person on the Earth. (I did not foresee Windows.  Missed the class on the "Lilith" box at school.  If curious:  )  Buy-and-hold can work pretty good.  Best trick is to start when you are really young.  Have a good portfolio when you are still in your 20's.   You don't need "artificial" intelligence for that.  Just don't be unwise.   Anyway, this particular AI approach looks like it can perhaps identify (ie. "characterize") the current market nature, and suggest when one might try to establish a position.  You might be able to use the old time-series serial autocorrelation stuff we learned in economics school to achieve similar results.
It works not badly for the bond markets (they have a high degree of serial auto-correlation), but I could never get any useful results for stocks, and it was dangerous as hell for commodities, given their characteristics of extreme reversals.  With commodities, you can make money for *years*, and then lose it all in a couple of ugly weeks when chaotic phase jumps happen.  (viz. the forward markets for crude oil, for example).  Even if you are right, you can still get killed if you act too soon.  Short oil at 120, knowing it is stupidly too-high priced (based on cost-of-production), only to be stopped out at 130.  Do it again, and lose all your money on the final run to 140/bbl, before the massive reversal begins.  The best training for commodity markets is Vegas and Monte Carlo, as your key objective is to participate without suffering "gambler's ruin". (But that is a different model...)
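The overlap issue is easy to see in a sketch.  Rolling forward one day per case means adjacent lookback windows share almost all their data; striding forward by the full lookback gives non-overlapping, independent samples - the fix suggested by the rescaled-range literature.  Illustrative Python, not the MAKECASE code itself:

```python
def windows(seq, lookback, overlap=False):
    """Yield index windows of length `lookback` over a series.

    overlap=True  : roll forward one step per case (windows share data,
                    blurring the trials as described above).
    overlap=False : stride by `lookback`, so each sample uses fresh,
                    non-overlapping history.
    """
    step = 1 if overlap else lookback
    for start in range(0, len(seq) - lookback + 1, step):
        yield list(range(start, start + lookback))
```

With a 10-day series and a 5-day lookback, the overlapping scheme produces 6 heavily-shared windows, the non-overlapping scheme just 2 independent ones - which is the trade-off: cleaner statistics, far fewer training cases.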

[May 17-18, 2017] - All nighters, days at Starbucks with the laptop... May is here, the apple blossoms are out, and I am here, writing this...  Mkt is providing lots of thrills and chills - like an old Lou Reed song.   I built MAKECASE to construct the training cases, and have been trying to train down to a t0+4 price-point on a specific series, from a sequence of segmented series.  These combine cross-sectional and time series elements (basically 5 days of history, across several different series), but reduced to a vector, for a specific time point (one day).  I am now *certain* this process is driving markets in many areas.  This is dangerous, but mine is not to reason why.   It is difficult to train to a price target (I'm trying an "exponential" transfer function for the network output - looks like a stupid idea, but I wanted to try it.)  I want to avoid working with "scaled" data, as I just find it annoying to use in realtime.   I have a simple network defined (30 input units, 20 hidden, one output) in Xerion, and I can run a few thousand iterations before the line-search fails.  But it does not want to work with un-scaled data.  I have only 4360 training cases - tiny by modern standards - this is really almost a "back of the envelope" exercise - but I found that simple stuff actually is pretty robust. (If you are bulletproof, you don't have to drive fast, right?)

Anyway, I wanted to see if I could use the exponential transfer function to just train to a future price, but it just does not work well.  The exponential transfer function is typically used for "softmax" training (training to a 0 or 1), and also with "cross entropy" minimization, instead of minimizing the sum of the squared errors.  These options are configurable in Xerion.  My network is called "MarketNet", and one can use the command "uts_show {uts_net[0]}" to view the details.  In bp_sh (the back-propagation shell), you have all these command options (eg: to randomize the net, "uts_randomizeNet MarketNet" will populate the network with random values before beginning training.)  You select the minimizer, give it a short name, config the search methods and step-sizes or types (the epsilon), and you can run training.  I wrote a trivial .tcl function, which can be sourced, to view the "target" from the training cases, versus the "output" of the network.  In "bp_sh" (the Xerion/tcl cmd shell), you can then enter: "compareMarketNet", and get a quick picture of how well the current training attempt has worked.  I'll post some code and examples here later, once I get this working right.

For the old stuff I did years back, I scaled the data between zero and one.  But you have to unscale it to use it, of course.  But I had this idea.  You really want probabilities anyway, so I will modify my MAKECASE program to generate signed values:  0 => forward mkt value does not change much either way, -1 => forward mkt is down significantly, and 1 => forward mkt is up significantly. Then, the network doing this "softmax"-style training should basically give me a probability estimate of what is likely to happen to the specific price series I am using as my training target.  Looks like the trick is to use a hyperbolic tangent activation function (values between -1 and +1), although exponential (values between 0 and infinity) is what is typically recommended for softmax-type training.

Oh, a little note on MAKECASE.  What a pain!  Initially, one thinks, "oh, just build a big matrix, and slice it row-wise" to get the series day-segments.  But of course, all the series have *different* holidays and other off-days, so they don't line up.  MAKECASE has to select one series (in my case, I use the SPX), and then conform all obs to those active day values.  The logic requires that, for any given day, you look back a specified number of days, and a collection of these day-segments forms your training-case for that day, along with the target you are training to.  Turns out that is tricky, but doable.  But you have to process each series carefully, and check for missing data, and such.  What is interesting about this approach is that it should obviously scale, and be applicable in other areas.  One uses Hurst exponents (re-scaled range analysis) to determine if the data is trending, random, or mean-reverting.  It's surprising how many Hurst exponents are right around 0.5 now (it's on the Bloomberg, has been for many years...).  But just because the series looks pure-random wrt itself, does not mean that its cross-elasticity is not a factor wrt other data vectors.   (The danger is, of course, the illusion of linkage, when none really is present.  But the flip-side is worse, no?  You have a pretty clear linkage, and you miss it, leaving all the money to be hoovered up by flash traders.)
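A simplified Python sketch of the alignment problem - not the real MAKECASE.  Here every series is conformed to one reference calendar (the SPX trading days), missing observations are simply carried forward from the last known price (one plausible choice; the real program checks missing data more carefully), and dates with no history yet are dropped:

```python
def align_and_segment(ref_dates, series_by_name, lookback=5):
    """Conform several series to one reference calendar, then cut
    lookback segments to form one training case per reference date.

    ref_dates      : ordered dates of the reference series (eg SPX days)
    series_by_name : {name: {date: price}} - holidays may differ per series
    """
    # Step 1: conform each series to the reference calendar.
    aligned = {}
    for name, obs in series_by_name.items():
        row, last = [], None
        for d in ref_dates:
            last = obs.get(d, last)   # carry last known price over holidays
            row.append(last)
        aligned[name] = row

    # Step 2: slice lookback-length day-segments, one case per date.
    cases = []
    for t in range(lookback - 1, len(ref_dates)):
        seg = {n: vals[t - lookback + 1:t + 1] for n, vals in aligned.items()}
        if all(None not in s for s in seg.values()):   # drop leading gaps
            cases.append((ref_dates[t], seg))
    return cases
```

Each case is then flattened into a single input vector for the network, with the training target attached separately.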

[May 11, 2017] - Still messing about with what to train to.  I don't want to just forecast, I want a more subtle picture of the future, where the AI can suggest the nature of conditions.  I am thinking I probably need to train to a generated boolean vector which can be interpreted in some sort of quasi-probabilistic way.   Playing around with ideas in APL...

[May 7-8, 2017] - Enough data-cleaning to have a simple prototype soon, I hope - by tomorrow or the next day.   I read this Isaac Asimov short story when I was very young, about a group of scientists working on force-field technology, who started having mental breakdowns.  One scientist suggested that humans were just lab test creatures, and that in the same way we ring-fence dangerous bacteria cultures with a circle of antibiotic (penicillin - C9H11N2O4S), humans were ring-fenced by those running the experiment, and the problems the team was facing were due to the potential effectiveness of the force-field technology they were working on.   The technology would allow protection from nuclear weapons.  The lab-rats-in-an-experiment idea came from the lead scientist on the project, and his "psychological penicillin ring" theory was accepted to keep this key guy working, despite his delusional state.  It was a great, great story, because it contained a unique theory of evolutionary human development that linked technological progress with progressive social jumps.  I searched thru my old books, and found the story, and the paperback cover is shown above in the ISS HDEV picture.  It was "Breeds There A Man...?", first published in 1951 in Astounding Science Fiction (now Analog).   Sometimes, I feel like similar things are occurring on this AI project.  I am beset by curious events, which constantly prevent me from making progress.   Rain, which fell for several days, and flooded the fields.  Trees, which were uprooted by winds, and hung at 30-degree angles over the power lines into the lab here.  (I took them down myself, with winches, a tractor, a series of ropes and pulleys, and a chainsaw..)  And yesterday, the machine running the Xerion Dmark demo crashed as I was cleaning a wad of dust from its front (I touched the boot switch?).  And the awful mess of the data - full of missing observations, many more than I realized.
But, I rebooted the Xerion demo box, and ran the network to train down to an old sample segmented time-series, got GNUplot and GS (Ghostscript) working right, and confirmed I can build my training-case file, run a "compareNet", and generate a visual of actual vs network training target.   I compared "fixed-step" (using an epsilon of 0.1) versus a line-search (in Xerion, "Ray's Line Search"), and the training times (in both cases, using the conjugate gradient direction method), and the training drops from roughly 70,000 iterations to around 400.  The heuristic algo seems to be: run line search on conjugate gradient for the first 300 or 400 iterations, then switch to a fixed-step, and you can train right down to noise, if you want to.  This technology works.  I suspect TensorFlow probably has all these kinds of intrinsics just built-in.  This is why I am stepping thru the process using the older Xerion code, so I can try to get a "through the glass, clearly" feel for how it works.  The original quote is biblical, is it not?  Something about "through a glass, darkly"? [Edit: yes, it's 1 Corinthians 13:12. And it's also a great 1961 Bergman film.. ]

[May 4, 2017] - Still doing data-cleaning... Also, I downloaded all the "MPlayer/GMPlayer" code, and built "MPlayer" (and the desktop GUI version, called "GMPlayer") for my CentOS Linux box from source.  You can get the code I used here:  This version is from 2016-01-24, and includes FFmpeg 2.8.5 in the tarball. To build MPlayer, you can create a source directory /usr/local/src/mplayer, download the tarball to that directory (I used MPlayer-1.2.1 as it looked quite stable), run gunzip to unpack the zipped file, then "tar -xvf" to untar the ball and create the MPlayer-1.2.1 source directory structure. Then, you just cd to it and do the usual "./configure", then "make", and then "make install" from a command line shell.  Make sure to include the "--enable-gui" parameter to "./configure", or you only get a CLI (command line interface) version of mplayer.  When I tried to configure, I got a message saying I needed "yasm", which turns out to be an open-source assembler that mplayer uses for some of its lower-level stuff.  So, you go get "Yasm", and do the same exercise - create a /usr/local/src/yasm directory, download the tarball there, unzip with gunzip, untar with tar -xvf, and run the three cmds: ./configure, make, make install.  That should install a working version of yasm.  Check it by entering "yasm --version" at a command shell.  Here is the Yasm url:  Having an open-source assembler might turn out to be useful here.  MPlayer, of course, is used to watch video files, or listen to music files.  The program "gmplayer" pops up an on-screen controller, which can be used to choose files to play, and/or create playlists.  You have to setup a default "skin" to see the gmplayer controller, which involves another download of a tarball into /usr/local/share/mplayer/Skin (I got Blue-1.12, at this url: , and used "bunzip2" to unzip the ball).
Then, I had to copy the contents of the "Blue" directory that was created into a subdir called /usr/local/share/mplayer/skins/default, in order for gmplayer to actually work.   This process builds the executables mplayer and gmplayer in /usr/local/bin.  Create a launcher icon on your desktop to run "/usr/local/bin/gmplayer", and you will have one-click sound and vision!  There is method to my video-madness:  Ideally, I would like to have my neural-net take input from a real-time market feed, and output a real-time video display which would augment one's own ability to develop a market "picture".

[Apr. 30, 2017] - (Doing this edit on my CentOS box, running Linux.  Works good.  Same box as I run my Rails webserver on, which keeps track of news-stories in a little SQLite database. Using Linux feels like being let out of that Apple-Microsoft jail..."Free! Free at last!" )  So, I built the MAKECASE program, to run thru my little database of time-series tables, and build a single table where each row contains a vector of data observations (mostly prices) for a given date.   MAKECASE takes a single vector of series numbers, and returns one big table, where each row is a date, followed by a bunch of observations, one from each series.  For the old Dmark stuff, I scaled the series-segmented data to fit between 0 and 1 (trains better, given the sigmoid transfer function), but now, TensorFlow can use logit, which has a faster rise-time, which might be better.  Will try a first version on Xerion, without scaling.  Then will attempt to replicate the same training using TensorFlow - as my first attempt to use it for something real.   MAKECASE still does not create the training target, which will be a "market characterization vector".   I'm thinking maybe take one key portfolio element, and cast its direction, intensity and dispersion, and try to train to that.  Or maybe a percent delta of price beyond a noise-filtering threshold?   The real key here is keeping the data clean.  And I should have *two* datasets, so I can see if I have just trained down to noise (you know this if you train fine on the first, but executing the 'net on the second set does not show any success beyond randomness.)    Or maybe I should just reduce the training attempt to a simple binary value: 0 = do nothing, and 1 = take a long position for a specific time window.  How far should MAKECASE try to look ahead to code its target?
I suppose ideally, you could let the network look at *everything* by *every time lookahead*, but I want to narrow down to something specific, so I can evaluate its effectiveness and value.  Collapse the "market characterization vector" down to a single risk-off/risk-on binary value?   That way, I am not trying to forecast, and I may get something useful.
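For the record, here is roughly what MAKECASE does, as a little Python sketch (the real thing is my own program against the SQLite tables; the function names and toy data here are mine, invented for illustration).  It pivots several date-keyed series into one row-per-date table, keeping only dates present in every series, then min-max scales each column into [0, 1] for the sigmoid:

```python
# Hypothetical sketch of the MAKECASE idea: pivot several date-keyed
# series into rows of (date, obs1, obs2, ...), keeping only dates that
# appear in every series, then min-max scale each column to [0, 1]
# (which trains better under a sigmoid transfer function).

def makecase(series_list):
    """series_list: list of dicts mapping date -> observation."""
    common = set(series_list[0])
    for s in series_list[1:]:
        common &= set(s)                  # intersection of available dates
    return [[d] + [s[d] for s in series_list] for d in sorted(common)]

def scale_columns(rows):
    """Min-max scale each observation column (in place) into [0, 1]."""
    for j in range(1, len(rows[0])):
        col = [r[j] for r in rows]
        lo, hi = min(col), max(col)
        for r in rows:
            r[j] = (r[j] - lo) / (hi - lo) if hi > lo else 0.5

# made-up toy series
a = {"2017-04-01": 1.0, "2017-04-02": 2.0, "2017-04-03": 3.0}
b = {"2017-04-02": 10.0, "2017-04-03": 30.0}
rows = makecase([a, b])
scale_columns(rows)
print(rows)   # → [['2017-04-02', 0.0, 0.0], ['2017-04-03', 1.0, 1.0]]
```

The missing piece, as noted above, is the training-target column, which would be appended to each row from near-future data.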

[Apr. 27, 2017] - A data-provider I use to keep a database current disabled their traditional .CSV access methods, and replaced this simple tool with an interactive process that creates .GZ files for download.  So, I had to re-write the data-retrieval method I use, creating my own little script-driven robot to access the data and unzip the .gz files as required.   Everything works again, and I can move forward on the AI neural net tools.  Will create a first-pass of the database inversion tool, to prepare the cross-sectional training cases, which will train to a characterization vector, created from near-future events.  In this way, I hope to sift out true-trends from the market noise.
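The unzip half of that little robot is the easy part; a minimal Python sketch (file names here are made up, and the download step itself is provider-specific, so it is omitted):

```python
# Hypothetical sketch of the retrieval robot's unzip step: after each
# .gz archive is fetched, decompress it back into the plain .csv that
# the database loader expects.  Paths and file names are illustrative.
import gzip, shutil, os, tempfile

def gunzip(src, dst):
    """Decompress src (.gz) into dst, streaming so large files are fine."""
    with gzip.open(src, "rb") as fin, open(dst, "wb") as fout:
        shutil.copyfileobj(fin, fout)

# demo with a throwaway archive
tmp = tempfile.mkdtemp()
gz_path = os.path.join(tmp, "series.csv.gz")
csv_path = os.path.join(tmp, "series.csv")
with gzip.open(gz_path, "wb") as f:
    f.write(b"date,close\n2017-04-27,16.25\n")
gunzip(gz_path, csv_path)
print(open(csv_path).read())
```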

[Apr. 09, 2017] - Google's "DeepMind Technologies" group in London have just open-sourced their "Sonnet" product.  This might be a big deal.  Sonnet sits on top of TensorFlow, and lets it be used to create complex neural networks more easily.  I am interested in trying it.   I've had a large tooth removed, have more dental work scheduled next week, and have to do a lot of tax work to file personal and corporate income tax and HST forms for farm and firm. Dealing now with pain...  [Update:]  Just went thru the *DeepMind* website, quick scan of their Github stuff, & read their paper on "Learning to Learn".  Imagine if US rocket-pioneer Robert Goddard was transported to a 1990's launch of the space shuttle - that's how I feel after this quick scan. These guys look like they own the AI field now, especially since they have the resources of Google behind them now.  They look to have infinite power, both in CPU cycles and cash!  Oh my... Crying

My only chance here is that these guys like chess.  I *hate* chess with a passion - as I detest most closed-environment gamey stuff.  Game-playing is time wasted.  All the interesting stuff and the stuff that matters - that makes a difference to the future, and drives humanity forward - lies in the open, *unbounded* realm of the pure real - the place where neural networks typically collapse and fail badly.   But you can use NN technology to *augment* human intelligence - like lenses can help your eyes see better, amplifiers can let you hear better, and computers can let you organize and process information better. (And yes, like a M1911 .45 can be used to punch a hole thru your adversary better than your fist can - let's be honest.)  In a formal system that is tightly bounded by rulesets, where the distributions are known, a well-built AI will *always* win.  What about open scenarios, where there are no formal rules, and the rate and intensity of change itself is also dynamic?  Can an AI help?   I am pretty sure it can.  And I think I know what it has to be able to do.  The AI does not replace or overwrite the human agent; it augments his ability, and lets him make better decisions, quicker, and with fewer of the errors that behavioural economics shows us *really* do occur.   I'm not in this for the money.  I want to prove a point, more than anything, and build a device.  We need AI technology like soldiers need rifles.  This technology could aid us all by letting us make fewer mistakes, and avoid the "Wee-Too-Low! / Bang-Ding-Ow!" outcomes that are becoming increasingly common in our modern world.  Perhaps I still have a chance... Blush (I put a picture of my primitive Analyzer tool output, essentially a first cut of the Augmenter I envision, running on a Samsung Tab-A, under Android 6.01, at screen bottom.  
It shows the M-wave Analyzer output, calculated and displayed on the Samsung tablet, and an estimated probability density, which suggests trade size for a given risk level.  It essentially suggests how big you should bet, given the risk level you want to accept, and shows it all as a picture, so you can see exactly what you are dealing with, given the data-range you believe is appropriate for the current picture-of-the-world your necktop neural network tells you is now in play.  You can see where I am going with this, yes?)

[Mar. 31, 2017] - Got Xerion running with original late 1990's data (Dmark segmented time-series network).  Ran with many different types of training - confirmed it all works.  Xerion looks to be a predecessor product to TensorFlow in many ways.  Using simple steepest descent (standard backpropagation), with fixed step and epsilon of 0.1, it can take about 90,000 iterations to train down to the noise in a segmented timeseries. But use a line-search, and conjugate gradient with restarts, and you can get to the same level of training (essentially, just overfitting a timeseries to check the limiting case of the training algorithm), and Xerion will fit to the curve in about 300 to 400 iterations.  It's a pretty dramatic difference.  My original approach was quite wrong (using a single time series, segmented into cross-sectional training cases).  I have a new idea, based on current practitioner methodologies, that looks to be much better.   I'm having arguments with a PhD type, who thinks NN tech is useless for market phenomena (he is a "random walk" believer, it seems), but given the modern state of the NN art, I am pretty sure my new approach can be useful.   I note with interest that Dr. G. Hinton (Xerion & TensorFlow AI academic guru), and Edmund Clark (former CEO of TD-Bank in Canada), will be setting up a new gov't funded "Artificial Intelligence" Institute in Ontario, based in Toronto.   Two new charts at page bottom - a Ghostscript image of the original Xerion-driven DMark series (raw price data scaled to fit between 0 and 1) training versus network output, and today's Cdn dollar chart - showing the complete NON-RANDOMNESS of the modern markets.    Markets are not random, they are chaotic.  The "random walk" picture of the world, where you believe in stable distributions, and build models that use distribution-tails to estimate your risk, is wrong.  It has already given us the 2007-2008 financial meltdown.   
Today, the Cdn-dollar chart looks like the output from a square-wave generator.  It's not random.  It is just one example of many that you can see *every day* in the markets. 

I've been stepping thru backpropagation by hand, using basic partial differentiation calculus, and the chain-rule, just so I can clearly understand the original idea.  I learned some C++ also.  Downloaded Alan Wolfe's NN sample code, only to find it won't run on my Linux CentOS boxes, with gcc 4.4, because of some new loop-construct recently invented and slotted into Clang or LLVM or whatever the heck the kids are now using - something from C++ 11 or 17 or Apple's lab.  More reading to do. This project is taking on a life of its own.
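The chain-rule arithmetic I've been stepping through by hand looks like this in numpy - a sketch of plain steepest-descent backpropagation on the toy Xor2 net, not Xerion's actual code (the restart loop at the bottom is my own addition, since fixed-step descent can stall on XOR):

```python
import numpy as np

# Backpropagation written out by the chain rule, on the Xor2 toy problem:
# one hidden layer of sigmoid units, squared-error loss, fixed step size
# (plain steepest descent).  A sketch, not Xerion's actual implementation.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)

def sig(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(seed, hidden=4, iters=20000, eps=0.5):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 1, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(iters):
        H = sig(X @ W1 + b1)              # forward pass
        Y = sig(H @ W2 + b2)
        dY = (Y - T) * Y * (1 - Y)        # chain rule: dE/dz at the output
        dH = (dY @ W2.T) * H * (1 - H)    # ...pushed back to the hidden layer
        W2 -= eps * H.T @ dY; b2 -= eps * dY.sum(0)
        W1 -= eps * X.T @ dH; b1 -= eps * dH.sum(0)
    return Y, float(((Y - T) ** 2).sum())

# Fixed-step descent can get stuck, so restart from a new seed if it does.
for seed in range(10):
    Y, err = train(seed)
    if err < 0.05:
        break
print(np.round(Y.ravel(), 2), err)
```

Even on this toy, you can see why a line-search and conjugate gradient beat a fixed epsilon: most of those 20,000 fixed-step iterations are wasted creeping along the error surface.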

[Mar. 24, 2017] -  Completed prototype of neural network definition and activation routines in APL on iPad.  Great having a working spec - trivial Xor2 net - can train it on Xerion, and activate/execute the net on iPad using APL (which is great for matrix stuff).  See page bottom for picture.  Numbers match, Xerion in Linux, iPad using APL, for trivial toy case of Xor2 network.

[Mar. 17-20, 2017] -  Working on "cross entropy" idea, which drives how artificial neural-networks are trained.  The idea is that the initial (actual) probability distribution is mapped, by the artificial neurons in the network, out to a posterior target distribution - and that there are different entropy characteristics across the various possible target distributions.  One seeks to minimize the "Kullback-Leibler divergence" or the entropy difference between the initial and the posterior distributions.  This sounds quite complex, but if you are using "one-hot" encoding (for example, trying to identify written digits), and your initial distribution is simply "0 0 0 1 0 0 0 0 0 0" - ie. your number is a "3", then the cross-entropy summation of the initial probability distribution values times the posterior generated distribution - boils down to taking a single natural logarithm of the sigmoid or logit value (ie. the probability-like number between 0 and 1)  that the network generated.    You can use a gradient descent search to drive your back-propagation, but the "stopping point" of the network training will be when all the cross-entropy values between the initial and posterior probability distributions are as small as possible.    It should be possible to make your network "recognize" with a high level of accuracy.  This recognition can extend to more than just written digits.   One should be able to create an artificial "Helper", that has superior recognition ability, for whatever you train it for, given you can "show" it enough accurate raw data - what we used to call "training cases".   I suspect "Helper AI" technology might become a must-have tool as we move into this brave new world.  (I really wanted to get a TensorFlow AI running on my iPad.  My vision for this was Isaac Asimov's "Foundation" series - where Hari Seldon had this "probability calculator" at the first chapter, set on Trantor.  
I can't get Numpy to load thru to Python yet on the iPad, but looks like Xerion might work...)  I am thinking of asking a Japan company to design a special Hyper-tablet device for me - but running *pure* Linux, no Android or iOS stuff in the way... 
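The one-hot collapse described above is just a few lines of arithmetic - here with made-up digit probabilities, where the true digit is a "3":

```python
import math

# With a one-hot target, the cross-entropy sum  -SUM( p_i * log(q_i) )
# collapses to a single term: -log of the network's generated probability
# at the "hot" position.  The numbers below are illustrative only.
target = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]              # one-hot: the digit "3"
output = [0.01, 0.02, 0.05, 0.80, 0.02, 0.02,
          0.02, 0.02, 0.02, 0.02]                    # network's output distribution

xent = -sum(p * math.log(q) for p, q in zip(target, output) if p > 0)
print(round(xent, 4))    # same single term as -log(0.80), about 0.2231
```

Driving that number toward zero for every training case is exactly the "stopping point" mentioned above.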

[Mar. 14-15, 2017] - Fell down a big rabbit hole. Decided to look at my old Xerion stuff, got obsessive about it, and decided to convert 20-year-old Uts/Xerion to run on a modern Linux box.  Xerion was the U of Toronto product built by Drew Van Camp and others, offered by Dr. Hinton's group to Canadian industry, as it was funded by a gov't grant process.  I took it and ran, and built a Linux box using Slackware Linux just to get Xerion running, and build some neural-nets to investigate time-series data.   As I dug deeper into TensorFlow/Python, I realized it looked a lot like UTS-Xerion/Tcl/Tk+itcl+Tclx - which I know well.   Learning is all about jumping from one level to another.  Getting Xerion running on a modern Linux has been a bit of work. (Just getting a clean compile of the code using a modern gcc was non-trivial.)  But I can run the original Xor2 example and it all seems to work well.   Having Xerion running will be very useful, as I can verify TensorFlow builds against original Xerion efforts.  Xerion is not convolutional, but it did offer a number of alternatives to basic gradient descent, which - in the example of training a boolean net like the Xor2 example - can be shown to be useful.  It's also a good learning tool, with nice visualization.  (Screen shot of Uts/Xerion is below..)  (Mar.15:  Fixed a bug - the Network Node Unit & Link Display was not working.  Built Xerhack, a visualizer toolkit that uses the Tk Canvas.)

[Mar. 8, 2017] - Got image hacking stuff working in Python on both Mac OSX and Windows.  Took the Ripples-in-Pond Tensorflow example, and made it look more like exploding stars in a dark star-field.  Runs *without* IPython, Jupyter and Python Notebooks (displays 5 images in sequence as .jpg files, uses SCIPY and the Pillow version of PIL (the famous Python Image Library)).   Images are interesting - like a star-field evolving over giga-years (see picture above.)   Here is part of the code:  (Click "Code" in top menubar for the rest of it...  Big Grin)

    # --- the Tensorflow LaPlace Image example (Uses PIL, and scipy.misc)
    # --- Modified: Mar 7, 2017 - by MCL, to just use image file display
    # ---                                       instead of Python Notebooks, IPython, etc.,
    # ---                                       with negative damping and darker image backgrd.
    # ---                                       (Instead of ripples in a pond, we have
    # ---                                       exploding stars ... )
    # --- Produces Initial image, 3 intermediate images, and the final image
    #     as .jpg files. Requires only: tensorflow, numpy, scipy and Pillow
    #     and Python 2.7.10.
    # --- This example taken from Tensorflow Site:
    # ---                           
    # --- and provides a nifty example of manipulating n-dimensional tensors.
    # ---
    # --- For Python newbies (me!):   1) invoke Python in terminal shell
    # ---                             2) >>> execfile("")
    # --- focus on understanding exactly how Tensorflow is reshaping tensors
    # ------------------------------------------------------------------------------------------
    # --- Import libraries for simulation
    import tensorflow as tf
    import numpy as np

    import scipy.misc

 <<< The rest of the code is in the "Code" section. Just click on "Code" on top menubar >>>
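For anyone who doesn't want to wade through the full TensorFlow version, the core of what that example computes is a damped wave update on a grid - here is a numpy-only sketch of the same idea (my own rewrite, not the TensorFlow site's code; grid size, step, and damping values are made up).  Flipping the sign of `damping`, as described above, feeds energy in instead of draining it, and ripples become "exploding stars":

```python
import numpy as np

# Numpy-only sketch of the update the TensorFlow LaPlace example performs:
#   u'' = laplace(u) - damping * u'
# stepped explicitly on a 2-D grid.  Parameters are illustrative.

def laplace(u):
    # 5-point discrete Laplacian, with wrap-around edges for simplicity
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)

n = 64
u = np.zeros((n, n))              # wave height
ut = np.zeros((n, n))             # wave velocity
u[n // 2, n // 2] = 10.0          # one "raindrop" in the middle
eps, damping = 0.03, 0.04

for step in range(500):
    ut += eps * (laplace(u) - damping * ut)
    u += eps * ut
    # (in the real example, u is rendered to a .jpg every so often)

print(u.shape, float(np.abs(u).max()))
```

The whole TensorFlow version is essentially this, with the arrays held as tensors and the update expressed as a graph.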



[Mar. 1, 2017 ] - As mentioned previously, I have Tensorflow + Numpy running on Python on the MacBook OSX now, and have got TensorBoard to show node names finally. This is the first trivial W = m * x + b (Linear Regression) program one can run, using the gradient descent method to do the least-squares regression line. I've updated the two pics showing TensorBoard's display of a process graph for linear regression (now with variable Names!), and the Python+Tensorflow code example.  I've also posted these to the GEMESYS Facebook site.  Next, I want to 1) create a very simple neural network, and 2) read a real data file of training cases, and produce some real output to a file. There is a lot of useful information on StackOverflow and various websites built by clever folks.  I've learned a bit just reading the StackOverflow queries.  I was sold on the NN methodology in the 1990's.  Xerion used Tcl/Tk to provide visualizations, which I used to develop in (and still use!), but I typically ran my networks in background mode, and used GNUplot and APL to chart the prediction curves.  I have these old C programs I used to chop up data series, and I am itching to drop some of the old training files into a modern Tensorflow net.
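For reference, the job that trivial program does can be written out by hand in a few lines of numpy - gradient descent on squared error for y = m*x + b (my own sketch; the toy data matches the shape of the tutorial's, with a true line of m = -1, b = 1):

```python
import numpy as np

# The linear-regression exercise in miniature: fit y = m*x + b by
# gradient descent on squared error -- the same job TensorFlow's
# optimizer does, written out explicitly.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, -1.0, -2.0, -3.0])   # true line: m = -1, b = 1

m, b, lr = 0.0, 0.0, 0.01
for _ in range(10000):
    err = m * x + b - y
    m -= lr * 2 * (err * x).mean()      # d(mean squared error)/dm
    b -= lr * 2 * err.mean()            # d(mean squared error)/db

print(round(m, 3), round(b, 3))         # → -1.0 1.0
```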

[Feb. 24, 2017]  - Tensorflow is a bit more involved than Xerion, Prof. Hinton's U of Toronto product from many years back.  Here is my first hack, getting the basic tutorial running, with a trivial linear regression, and viewing the graph in TensorBoard, which one does using a browser session to localhost, port 6006.  To get the graphic working,  you slot in the statement "writer = tf.summary.FileWriter('/Path/to/logfiles', sess.graph)", before you run your training.  This writes event-log data describing the model structure to the TensorBoard log directory, allowing the visual image of your model to be generated.  Very, very cool.  I put two images at *very* bottom of page, one showing the program text for my modified version of the TensorFlow "Getting Started" tutorial with simple linear regression model Y = m * X + b, and the generated TensorBoard model structure image, which is viewed using the Firefox browser on the Macbook.

[Feb. 21, 2017]  - Ok, got it. Finally got TensorFlow installed and working. Gave up on the Linux box, as it runs some production stuff on news-articles that I need.  Used the Apple MacBook Pro with Yosemite (OS X 10.10.5), which had Python 2.7.10.  Was a complex project, but got it running.  Apple had Python 2.6 running by default, and I had installed Python 2.7 with "numpy" (the scientific numeric package for Python - it's just the old Fortran math libraries, which I used to use at Treasury for bond-math calcs and econ-research).  Had to get the Python "Pip" program working, and the first install of TensorFlow with Pip smashed everything, due to a flaw in pyparser stuff.  Had to manually fix a Python prgm called "" in directory /System/Library/Frameworks/... tree, as well as disable the original "Frameworks" located "numpy" and "six" modules.  This was critical.  The TensorFlow Python-pip install caused pip, easy_install, and the lot, to fail badly.  And the Frameworks directory tree Python modules (some Apple standard?) caused Python to always load the old Numpy 1.6 and six 1.4 versions - and TensorFlow needs Numpy 1.12 and Six version 1.10 or higher.   Until I fixed the "" parser stuff, and disabled the Apple-located default numpy and six, TensorFlow complained about wrong versions. What is silly, is that "Pip" (the Python Install Program) drops the updated modules in another directory, and until the ones earlier up the path are removed (eg. from numpy to numpy_old), Python keeps loading the old ones, even after one has run pip and/or easy_install to load in the new ones.  I put a note on StackOverflow and posted the bug and the fix on Github/Tensorflow, search for Gemesys.  Bottom-line, is I was able to run the baseline TensorFlow tutorial, and make it print 'Hello TensorFlow!'
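A quick diagnostic for this stale-module problem: ask Python which numpy (and six) it is actually importing, and from where:

```python
# Check which copy of numpy (and six, if present) Python actually loads.
# If the printed path points into an old /System/Library/Frameworks tree,
# that copy is shadowing the one pip installed, and must be moved aside
# (e.g. renamed numpy -> numpy_old) before the new version will load.
import numpy
print(numpy.__version__, numpy.__file__)

try:
    import six
    print(six.__version__, six.__file__)
except ImportError:
    print("six not installed")
```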

[Feb. 19, 2017] - I hate Linux dependency issues. Tensorflow requires glibc 2.14 and my CentOS 6.6 box has glibc 2.12, etc. etc...  TensorFlow wants Python 2.7 (or 3.5), but CentOS 6.6 is default Python 2.6.6, which "yum" needs to work, so I have to try virtualenv, or whatthef*ckever.   I've tried several tricks to get TensorFlow running, but no luck even on the Linux box.     I had hoped to put some datascience stuff on the iPad.  I have APL running, and GNUplot can do non-linear regression, but I was hoping to make a neural-net that could be trained on a GPU-Nvidia type Tensorflow box, and then just run on the iPad.  So far, no go.

[Jan. 27, 2017 - Started working with Tensor Flow, trying to do some gradient descents across a custom phase-space.   I attended Geoffrey Hinton's Xerion lectures at UofT back in the 1990's, and I built some neural nets using Xerion NNS to backtest commodity markets.  They worked, actually, and I had planned to publish a page on Xerion and Tensor Flow...  but I got very ill - some kind of flu thing which involved a 'cytokine storm'.   I'm recovered now, but it was touch and go.  Wanted to publish a page with a running Xerion net (or Tensor Flow example) being back-propagated, on the iPad.  Apple is a serious monopoly, and AI is real and perhaps dangerous.  The idea is to have a hand-held device that can provide real-time decision-support, but is not connected to any data link - what used to be called "air gap" security.  [Note: It is estimated that more than 70% of all trades on equity markets now are algorithmically driven.  If built right, they provide a real edge. ]  For info on air-gap security, read Bruce Schneier's piece here:    The Dow 20,000 thing is a bit of a concern.  There may be too much digital cash floating around.  Historically, the markets have been very effective at removing excess wealth.  If interest rates move up quickly, equity markets could fall 20%.  That is DOW 16,000, and it may happen at "internet speed".  The current stability may be a dangerous illusion, as powerful forces pull our world and its markets in divergent directions simultaneously.   ]

[ Dec. 13, 2016 - Got "DOSpad Math" compiled and deployed successfully to iPad 2, using Xcode 6.2.3.  Insane work. Also, updated "Time Travel" page with Harlan Ellison montage. (Click "More" button on top line right to show "Time Travel Research" page) ]

[ Dec. 7, 2016 - OpenWatcom Fortran77 on the iPad  - details ]

[ Nov. 28,2016 - Included info on how to get Python 2.7 running on iPad ]

[ Nov. 03,2016 - Added page: How to put VLC 2.1.3 on iPad-1 running iOS 5.1.1 ]

[ Oct. 23,2016 - Added page on "GNU gcc" = How to compile & run a C program on iPad ]

The Hack Which Launched this Site...

I put this website together after I hacked my old iPad, and felt I should publish the method, as it turned the old device into a very cool experimental platform, and a surprisingly useful research tool, as it is possible to obtain most of the Unix/Linux utilities from Cydia, and configure Safari to be able to directly download viewed content (eg: videos, .PDF files of important documents, etc.)  As well, there are application hives, or "repos", which offer very useful utilities, such as "iFile", which allow navigation of the native file system.  (One uses Cydia to add "sources", such as "" and "" to gain access to these additional applications).   (Further, if you use static IPv4 numbers on your local WiFi-enabled LAN, you can seamlessly transfer files between the iPad and either Windows or Linux machines.)

I've provided detailed instructions for "jailbreaking" the original iPad.  Once the iPad was opened up using the "Redsn0w" software,  Cydia was used to obtain *root* access to it.  It is our belief that *root* access should be provided to all device owners, if they request it.  ("root" is the User-id that provides full, administrative control in any Unix/Linux system.  It is like the "Administrator" account in Windows.)  It is a lawful act to obtain this access - known as a "jailbreak" - for any device which you own.   And by doing this, you can open up the range of applications and technologies that the device can address, regardless of the restrictive trade practices that device makers employ to limit the capability.

Once the iPad was unlocked, and SSH and SCP were configured and made available, I was able to install sAPL and APLSE on it.  I also installed Borland 3.0 C++, and compiled the Blowfish encryption algorithm, to confirm that DOSpad (the PC-DOS emulator available for the iPad) behaved correctly.  The generated .EXE files for Blowfish on Android with gDOSbox, Windows XP/SP3 CLI (Command Line Interface), and those compiled on the iPad under DOSpad are all identical. 

I've also built and deployed thru the Google "Play Store", some interesting apps on the Android platform.  These include gDOSbox, GNUplot37, and several APL interpreters.  The Android software is experimental, and does not contain any usage tracking or in-app advertising.  I did this project mainly because I wanted to run a real APL on a tablet, as APL was the first language I learned, at University of Toronto and University of Waterloo. 

APL was (and is) unique in that it provided real-time, interactive computing, before the advent of personal computers and tablets.  Ken Iverson, the inventor of APL, originally developed the language as a notational tool to express algorithms.  IBM then took the idea, and built the interpreter.  Personal computers - which ran only APL! - were developed and sold by IBM in the early 1970's. (A prototype was made available to some clients in 1973.  It was a complete personal computer - called "Special Computer, APL Machine Portable" (SCAMP), and it ran APL.)  For those of us involved in computing in those early years, APL was the only real-time, interactive computing environment, and it was the first desktop, personal-computer system, as well.

So I just had to put APL on these little tablets. Big Grin

The website here is a work-in-progress.   It consists of:

  - APL on an iPad  - the notes on how to hack the iPad, and open it up to installation of non-Apple/iTunes software.   Also includes a link to my github site, where a zip file of the P/C version of sAPL files can be obtained.  sAPL is freeware, and can run in "Cmd" shell on any Windows machine, as well as Android gDOSbox, or iPad DOSpad.  (See below)

  -  GEMESYS Apps on Android - just a short summary.  This software is experimental, and is provided primarily for educational and recreational use.  Google keeps changing Android, and this makes the Android environment fragile and unstable.  Note that if you are running Android Lollipop or Marshmallow, you will need to download the "Hacker's Keyboard" and make it the default, to use the GEMESYS Android apps now, as Google has altered Android system keyboard operation.  (See below...)

  - Fractal Examples on iPad using APLSE  - I show two recent images generated using APLSE running on the iPad. (Also down below...)

  - GNU gcc & Python 2.7 - How to Compile & Run C programs natively, and install Python  - Application development for tablets typically involves IDE's and a bunch of stuff to fabricate vendor-locked packages.  With a *jailbroken* iPad, you can load GNU gcc onto it, and develop and run C programs right on the device. The underlying iOS is very unix/linux like, and can be used effectively on its own, as a fully functional computer, once tools are made available.  Python 2.7.3 can be installed also. (First button, top line)

  - OpenWatcom Fortran-77 - How to run Fortran on an iPad - This is another DOSpad trick, where OpenWatcom Fortran77 is shown configured and running on the iPad. 

  - How to Put VLC on iPad-1 - Apple will not let you access the older versions of applications from their iTunes/iStore.  They want you to buy a new device - each year, it seems.  But if you jailbreak your iPad, you can get the .IPA file from the VLC archive, and install it with Install0us.  VLC is fully open-source, and will let you watch downloaded .FLV (Flash Video) files.  VLC 2.1.3 for iPad-1, running iOS 5.1.1 is Taro-approved.

  -  Pictures from Space - I have a research-interest in Chaos Theory, and fractal geometry, turbulent flow, and so on, with specific reference to the Capital Markets.  Images from space show an astonishing variety of fractal examples.  The recent Juno probe shows amazing images of the turbulent flow of the atmosphere of Jupiter. (Second button, top line). The ISS also shows wonderful space-views of our home-world.

   -  Economics and the Stock Market.  (What I studied (officially) when I was at school).  And since we pay the bills as much by our investment results, as by our consulting efforts, the markets remain a constant and critical focus.  I will try to note some useful observations here. (Third button, top-line)

  -  Statistics & The Null-Hypothesis.  A very great deal of what is written about statistical methods, and the mathematics of data-science oriented research, is either incoherent or incomprehensible.  I ran across this well-written note, and before it is vandalized by professional statisticians who seek to raise the barriers to entry to their dark-arts, I thought it should be preserved.  I will try to add some clear examples of actual research.  I used to use SPSS, SAS and R.  Awful stuff, but data analysis can yield wonderful dividends, if it is done right, and you understand *exactly* what you are doing.  (Button 4, top-line)

  -  Hausdorff (Fractal) Dimension Examples and Explanations - lifted from other websites (which may change).  The examples and explanations are good, and I wanted to preserve them. (More button / top line)

  -  Images and notes on Time Travel (Why not?  It's my site!)  And who does not love the idea of Time Travel?   We are all time travellers, aren't we?  The past offers us insight, and the future, opportunity.  But what will the future hold - pleasant dreams or our worst nightmares?    (More button / top line)

Any comments or questions can be addressed to gemesyscanada < a t > gmail dot com.  (I spell out the email address here to limit the spam robots from mail-bombing me.  I trust you can understand the syntax.)

  -  TensorFlow/Xerion Neural-Network Development.  This is my latest thing, and I hope to use this new (old) technology to pull together a number of threads, and get to a better method.  If Thaler's work is right (based on Kahneman and Tversky), my weakness and deep loss-aversion will just keep me from taking action, when it is needed most.   It appears one must effectively automate all investment activity, if one is to have any chance nowadays.  The low-return world demands it, as do the AI/algorithmic-driven modern markets.  One cannot fight the world - one must dance with it. Wink Note - I started out planning to use TensorFlow primarily, but I could not get it to run on my Linux boxes.  I finally got it running on my MacBook, but I found I was also able to get Xerion running on my modern Linux machines.  Xerion is the Univ. of Toronto product Dr. Hinton's team developed in the late 1990's.  It is written in C and Tcl/Tk, and it is also complex, but I know it well.   I had originally run Xerion under Slackware Linux, in 1995-8, and had built neural-nets to forecast commodity markets.  At first, compiling Xerion under gcc generated a blizzard of errors. But I made a number of minor changes, and used a 2008 gcc 4.3.0 version (with some custom-hacked stuff to address gcc 1990's-isms), and also downgraded Tcl/Tk from 8.5 to 7.6.   The running Xerion (with examples shown) runs on Fedora Linux boxes, and works surprisingly well.  Much better than I expected, actually.  I re-ran some of my old stuff from the 1990's (the D-Mark forecaster) as a regression-test, and confirmed I could generate exactly the same results, right down to the GNUplot graphics, viewed using Ghostview (I'm using GPL Ghostscript 8.6.3, as GNUplot will generate both .jpg and postscript output files).  I hope to transition some work to TensorFlow, soon.  But the Xerion stuff - using this signed boolean jump-delta idea - seems also to work *much* better than I expected.  
It is actually kind of exciting, truth be told.  I have this sAPL workspace, "neuralxr", running on the iPad, which I think I can extend to basically run (ie. "activate") the Xerion-trained "MarketNet", for an experimental series I have been focusing on for years.  If you look carefully, you can see the target is CM.  I use CM because it has unique, serial-autocorrelation characteristics - like a junk-bond, actually.  If you think of equity as basically a 100-year bond, then this stuff, with its curiously high yield, is basically just a long-duration not-really-but-trades-like-it high-risk, high-yield bond.   I have no formal connection with CM of any kind, except a small LOC on my farm (full disclosure) from them, which is undrawn.  Another property that makes CM unique among Cdn banks is its historical commercial roots.  They are risk-taking real bankers, who get out and make loans.  It's a risky business, but it is also very profitable.  And I have an old high-school buddy (more full disclosure) who runs a major regulatory organization that manages the macro-prudential systemic risk monitoring of Cdn SIFIs, and I am confident his guys are doing their jobs.  But let me stress, I have no special knowledge, beyond what I read in the papers, and on the wire services.   Banking is just one of those wildly-good business models.  As long as you don't blow up, given the modern world (buckets and buckets and buckets of fiat money created everywhere, all the time, by just about everybody - and without any recourse required to turn it into gold or latinum bars or anything but computer bits), banking only really has the system-risk of hyper-inflation that it has to deal with.   
In a world awash with fiat-cash, even if you make too many bad loans, as long as you ensure adequate collateral (Canada has a long tradition of 75-25, for example - banks won't loan more than 75% of value without CMHC or someone else taking the hit if the loan sours), then worst-case, you stop making money for a while.  For example, on my farm, which is maybe worth 7 figures, and has *no* mortgage at all, the LOC is only 5 figures, and is not even drawn.  In the part of the Province where I live, this is typical.  Farms around here often sell for cash, or with financing arranged by family connections.  Yes, the large commercial loans banks make can go south, and then you have to set aside reserves.  But the capital requirements are tough and fiercely enforced here.  As we drive towards the future, Canada looks more like the Switzerland of the Americas, rather than the "Argentina of the North" some used to term it.   I also target CM in my NN example because it is a good trader's stock - lots of action, whether you like it or not.  The jump-delta table wants to be full of lots of -1's and +1's, not just a bunch of zeros, right?  So it is obvious then, that you want to train to a target that demonstrates beta greater than one, and has a Hurst exponent that does not converge on 0.50 over time.

Neural-Net run on iPad using sAPL

I have hacked and "jailbroken" my iPad Gen-1, and have loaded sAPL on it.  This was the APL product I originally released on the Blackberry Playbook, and remains available for Android devices, from the Google PlayStore. (A Windows Cmd-shell and/or DOSbox version of sAPL is available from the GEMESYS Github account, as a .zip file.)   sAPL is a P/C version of the original IP Sharp APL mainframe product, which ran on IBM 370's, and Amdahl V8's.  This iPad version, running under DOSpad, provides a workspace just over 300K.  It is a small, but reliable, implementation of a full APL.

See the section: "APL on iPad" for details on what had to be done to put APL on the iPad.

I've built a small sAPL workspace, as a proof-of-concept, that accepts the weights, bias values, and structure of a trivial Xor2 (boolean exclusive-or) neural network, trained using Xerion, which can be activated (ie. run) on the iPad.  This has potential applications, as it would allow a complex network to be trained on a research machine, and then the network's weights and structure can be transferred to the iPad, so that evolving, real-time scenarios can be entered on the fly, by someone who wants to query what the trained network "thinks" of a possible data-scenario.  It's a simple approach, but might be useful.  An example of the simple Xor2 network being activated is shown to the right.
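
The activation pass itself is nothing exotic - just repeated weighted sums pushed through a sigmoid, layer by layer.  Here is a minimal Python sketch of the same feed-forward step.  The weights below are hand-picked textbook values that happen to implement Xor2, *not* the actual Xerion-trained weights, but any trained weight set in the same layout would activate the same way:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def activate(inputs, weights, biases):
    """Feed-forward pass: for each layer, every unit takes a weighted
    sum of the previous layer's activations, adds its bias, and
    squashes through the sigmoid."""
    acts = list(inputs)
    for w_layer, b_layer in zip(weights, biases):
        acts = [sigmoid(sum(w * a for w, a in zip(w_row, acts)) + b)
                for w_row, b in zip(w_layer, b_layer)]
    return acts

# Hand-picked illustrative weights (not Xerion output):
# hidden unit 1 acts like OR, hidden unit 2 like NAND, output like AND.
weights = [
    [[20.0, 20.0], [-20.0, -20.0]],   # input  -> hidden (2 units)
    [[20.0, 20.0]],                   # hidden -> output (1 unit)
]
biases = [[-10.0, 30.0], [-30.0]]

for a, b in ((0, 0), (0, 1), (1, 0), (1, 1)):
    print(a, b, round(activate([a, b], weights, biases)[0]))
# Rounded outputs: 0, 1, 1, 0 - the exclusive-or truth table.
```

The point of the design is that `activate` is cheap and has no training machinery in it at all, which is why it ports comfortably to a 300K APL workspace on a tablet.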

GEMESYS Apps for Android - on the Google Play Store:

gDOSbox has over 50,000 downloads on Google Play Store

The following GEMESYS Android Apps are available on the Google Play Store:

gDOSbox  -  This is a full-featured implementation of the DOSbox-0.74 open-source DOS emulator for Android.  It was developed for Android version 4 (KitKat series), and was recently upgraded to work on Android 5 series (and above) devices.  Recent changes by Google to their keyboard have caused issues on some devices, so we strongly recommend the "Hacker's Keyboard", by Klaus Weidner. 

Download "Hacker's Keyboard" from the Google Play Store, then use the Settings icon, scroll to "Language and Input", and select/invoke the "Hacker's Keyboard".  Then, in the "Default Keyboard" option, choose the "Hacker's Keyboard" as your Default Keyboard.  The Google keyboard attempts to hijack *all* user input, and damages the gDOSbox interface routines.

gDOSbox is a full DOS implementation, with corrected math routines, which allows DOS .exe files to be run on an Android tablet. 

GNUplot37 - A version of the GNUplot graph generation tool.  Allows data to be quickly plotted in two and three dimensions, as well as supporting math processing, curve-fitting to data, and displaying the result.  Try it with:  "plot sin(x)" to see a sine wave.  Then load the demo (hundreds of examples) with "load 'all.dem' ".   To clear the screen (if using an on-screen keyboard), use "!cls", and use "!dir /p" to review all the GNUplot examples available.

sAPL      -    The original IP Sharp 32-bit APL, which runs in an emulated IBM 360/75 environment as a series of .exe files, originally released to run on IBM P/C's, and then made into a freeware product by IP Sharp, to encourage APL education.  APL characters are generated by ALT-key sequences (eg. ALT-L creates the APL quad character, ALT-[ creates the assignment operator, etc.), so the Hacker's Keyboard is required.

APLSE    -   The STSC APL freeware product, directly downloadable from the PlayStore.  (You do not need to install gDOSbox separately; it is loaded first.)  This is an excellent small-footprint APL, which has full graphics support.  It is reliable, and was released as a freeware product to encourage and assist APL education.  Like sAPL, the APL characters are created using ALT sequences, so ALT-[, for example, is the assignment operator.  The "Hacker's Keyboard" is required.

TryAPL2  -   The IBM full featured "TryAPL2" product, which allows a subset of early APL2 to be run on a P/C.  This is a working APL, which includes IBM's variant of the enclosed-array extensions.  APL characters are generated with shift-letter sequences, so gKeyboard can be used with this APL.

WatAPL  -    The original Watcom APL, circa early 1980's.  This was recovered off of an original Watcom APL System floppy diskette, and dates from 1984.  It can be used with the gKeyboard, as the APL characters are generated with Shift-key sequences.

gKeyboard - A basic keyboard, with the APL characters shown on keytops.  Useful for TryAPL2 and WatAPL, and for learning the location of APL characters on the keyboard.

All GEMESYS software is freeware for educational purposes, and contains *no* advertising or in-app usage monitoring or tracking.

The seven GEMESYS apps for Android. No *root* access is required to run any of them!

Examples - iPad/Samsung Tab-A as "AI-Helper" platform, Xerion Xor2 Example, TensorFlow Linear Regression Example

The freeware APL,  APLSE, can be run on the iPad, using appropriate emulation. As an example,  I calculate and generate a graphic of the Logistic Equation phase-space, as a fractal example.  For those who study or work with fractals and Chaos Theory, the "Tent Map" is well known.  That was my first example.
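
For anyone who wants to reproduce the idea without an APL in hand, the core of the Logistic Equation experiment fits in a few lines.  The Python below is a sketch of the standard map x → r·x·(1−x) - not my actual APL workspace code - showing the first period-doubling that the phase-space graphic makes visible:

```python
def logistic_attractor(r, x0=0.5, burn=500, keep=100):
    """Iterate x -> r*x*(1-x), discard the transient, and return the
    distinct attractor values (rounded, so a cycle shows its period)."""
    x = x0
    for _ in range(burn):
        x = r * x * (1.0 - x)
    pts = set()
    for _ in range(keep):
        x = r * x * (1.0 - x)
        pts.add(round(x, 6))
    return sorted(pts)

# Below r = 3 the map settles to the single fixed point 1 - 1/r;
# past the first bifurcation it flips between two values.
print(logistic_attractor(2.8))       # one value, near 1 - 1/2.8 = 0.642857
print(len(logistic_attractor(3.2)))  # 2 (a period-2 cycle)
```

Sweeping r from about 2.8 up to 4.0 and plotting the attractor values against r gives the familiar bifurcation fan; the Tent Map yields an analogous picture.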

I also have GNUplot37 running on the iPad.  It is available from the Google Playstore as an Android app (no adverts, no in-app monitoring, no scams), and it can be used to visualize a variety of numeric datasets.  Three examples are shown below (all running on my customized, jailbroken iPad, which, once jailbroken, functions as an effective Linux/Unix tablet computer).

The electrostatic field display (see the hundreds of tiny, pointing vectors?), is an example from the GNUplot37 demo programs.  It takes about 12 minutes to run on the iPad, but the information it conveys is impressive.

As a straightforward economic series, the London Gold price 10:30 AM fixing (daily data, from 1965 to 2016) is shown.  If you look at long-duration, accurate price-series, you can see the mechanism of market dynamics fairly clearly.  The boom-bust sequences in the spot gold market are typical of *all* financial markets.  That is why the attempts by American and European legislators to over-protect the financial system are deeply misguided.  Markets *require* the freedom to bankrupt foolish people who mindlessly follow trends, and enrich those who deploy risk-capital in places where real risk is present.  Risk needs to be recognized as a very real part of how markets do their job.  Remove risk, and you remove the effective, allocative intelligence of market behaviour.  Political people are often quite unable to grasp this simple truth.  Prices have to *move* and sometimes, move *a lot*, in order to do their job correctly.  Blaming markets for bad outcomes is as unwise as blaming oxygen for causing a fire.

The last iPad display example shown is a 3-D graphic showing a surface generated by a trigonometric function, again using GNUplot37 on my hacked iPad.

My Vision for the AI-Augmenter (or AI-Helper..)

My vision for the AI-Augmenter (or AI-Helper), involves having a series of well-trained neural networks on a tablet device, and being able to interrogate them with current data, and get an "opinion" from them - and possibly display this amalgam of the AI's opinion in a graphic format that a human is comfortable interpreting - perhaps like a cross between the electrostatic field display (a bunch of little pointing vectors), and the 3-d surface, shown in the last example.

Examples of Xerion running the simple Xor2 network on a Linux development box are shown, as is an example of TensorFlow (the Google AI toolset, recently open-sourced), running on my MacBook Pro.  I find working on the MacBook Pro annoying and irritating, and just getting Python to successfully load and access all the libraries needed to run TensorFlow was more work than porting Xerion to a modern Linux, and getting a clean compile from the source.  I had to down-convert Tcl/Tk from 8.5 back to 7.6 and such, but that was not a huge hardship or difficult exercise.  The MacBook Pro hardware is very fine, but the Apple software is carefully designed to aggressively benefit Apple, regardless of the grief it causes independent developers.
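
The TensorFlow linear-regression demo boils down to fitting y = w·x + b by gradient descent on squared error.  Here is a framework-free Python sketch of that same toy problem - the synthetic data and learning-rate/epoch choices are mine, for illustration, not TensorFlow's demo values:

```python
import random

def fit_line(xs, ys, lr=0.01, epochs=5000):
    """Fit y = w*x + b by plain gradient descent on mean squared error -
    the same toy problem as the TensorFlow linear-regression demo,
    minus the framework."""
    w, b, n = 0.0, 0.0, len(xs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Synthetic data: true slope 3.0, true intercept 0.5, small noise.
random.seed(1)
xs = [i / 10.0 for i in range(50)]
ys = [3.0 * x + 0.5 + random.gauss(0, 0.05) for x in xs]
w, b = fit_line(xs, ys)
print(round(w, 2), round(b, 2))  # should recover roughly 3.0 and 0.5
```

The whole Python/TensorFlow dependency dance buys you automatic differentiation and GPU dispatch; for a two-parameter model like this, the hand-written gradient is the entire job.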

Given that Microsoft was accused of being a "monopoly", and faced lawsuits for simply including a browser in its Windows O/S, I remain astonished by the extensive, and unchallenged use of monopolistic strategies that Apple gets away with.  They have restrictive dealer pricing, a "you-can-only-do-your-business-through-our-company store" policy that is a classic strategy of a monopolist, and they want additional cash-payments just to access development tools that are required to write computer programs that are to run on other Apple hardware.   In the 1970's, when IBM attached similar restrictions to their mainframe machines, they were successfully prosecuted by the US Justice Department for monopolistic, anti-competitive behaviour.   I like Apple hardware (which is built off-shore), but the code inside iOS that initiates the "Killed: 9" response when I attempt to run a gcc-compiled C-program, seems more like a monopolist's strategy, than it does a legitimate attempt to protect the system integrity. (See the "GNU gcc & Lynx" section, top line of this site to see what I am referring to.)

Very recently, Google has announced it will offer (as open-source) something called "TensorFlow-Lite", which will allow a subset of TensorFlow to operate on a tablet.  This is a very wise idea, and typical of the cleverness the Google folks demonstrate.  The most effective place for an AI tool is right in the hands of the client.

And this is key:  It has to be *unique*.  If AI is to have any benefit for me - especially in a complex, dangerous, tactical situation - it will have to offer something unique that only I have - it must offer me an *edge* of some sort.  It need only be a tiny edge (as most are), but it will be the capacity of AI-Helper tools to offer that custom edge that will make them quickly indispensable.  Once your "AI-Helper" gets understood to be offering you a real, actionable advantage, it will quickly become essential - like a telephone was to a stock-broker of old, or an automatic assault rifle is to a soldier on the battlefield.

The three iPad images below, were made with GNUplot37, which runs on the jailbroken iPad, under the DOSbox port, called "DOSPad-Gsys". The old Ver. 1.0 iPad can be a fully-functional, and useful computer, once the Apple iOS restrictions are bypassed.  The field-lines display is particularly interesting, as it requires substantial floating-point math calculations to create.

London 10:30 AM Spot Gold Price - 1965 to 2016, rendered on iPad, using GNUPlot37, running under DOSpad Gsys.

Example of Surface Plot - 3-D, using GNUplot37 - with contours, and accurate math processing.

Left side is Xerion on Linux, right side is Actnet function in sAPL on iPad, with same network weights. Example training cases produce same network output, both platforms.

Here is the Probability Calculator running on the jailbroken iPad.  This shows an estimated probability density function for a possible trade with a 20-day duration.  The underlying market database can be migrated to the iPad from the desktop box via secure copy (the Linux utility "scp"), given that one has Cygwin tools to support "ssh" (secure shell) on the Windows box that maintains the data.  The idea, of course, is to have a series of neural networks watching all the data in something close to real time, and migrating information like this to the tablet, where visualization can be used to sanity-check the network's recommendations, before pulling the trigger on any given trade.
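
The density display itself is conceptually simple: bin the observed 20-day returns and normalise so the bar areas sum to one.  A Python sketch, with simulated returns standing in for the real market database (the bin count and the simulated distribution are illustrative choices, not what the actual Probability Calculator uses):

```python
import random

def pdf_histogram(returns, bins=11):
    """Crude empirical probability density: bin the observed returns and
    normalise counts so the histogram integrates to one."""
    lo, hi = min(returns), max(returns)
    width = (hi - lo) / bins
    counts = [0] * bins
    for r in returns:
        i = min(int((r - lo) / width), bins - 1)  # clamp r == hi into last bin
        counts[i] += 1
    n = len(returns)
    # Each entry is (left edge of bin, density value).
    return [(lo + i * width, c / (n * width)) for i, c in enumerate(counts)]

# Simulated 20-day returns: 20 small daily moves summed, 5000 trials.
random.seed(7)
sim = [sum(random.gauss(0.0005, 0.01) for _ in range(20)) for _ in range(5000)]
hist = pdf_histogram(sim)
width = (max(sim) - min(sim)) / 11
print(round(sum(d * width for _, d in hist), 3))  # total area: 1.0
```

From a table like `hist`, the tail mass beyond a stop-loss or target level is just a partial sum - which is the sanity-check one wants before pulling the trigger.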

Wine-2.0.1.tar.xz checksums (MD5 and SHA-256).  I've just downloaded the stable Wine 2.0.1 code, and have now migrated my Time Series Manager to Linux - currently Fedora and CentOS.

Windows .EXE's for TSM and Gnuplot, running on CentOS 6.6 (Linux kernel 2.6.32-504.el6.i686), using a Pentium 4 (2.40 GHz) cpu, with only 2.0 GiB memory. I built this box just as an experiment (old 32-bit processor), but it runs so well, I can run a WEBrick Rails web-server as well as the old analytic stuff, and it is still snappy quick.