Neural-Network-AI Experimental Results: Developed a portable Xerion + TSM + Lynx (ssl-enabled) + GNUplot platform on a Linux (Fedora/RedHat) laptop (an Acer with Intel Centrino). This Linux laptop (Gnome Desktop) also runs a current Firefox (modern gtk+2, glib, gdk, etc.). Wine - which runs Windows programs on Linux - is used to support a runtime version of TSM, the Time Series data manager, which transforms raw price data into training cases for the Xerion-configured neural network (NN). For the current NN-driven AI under test, the training is sourced with boolean impulse-data from various daily market prices for tradable securities and commodities, over an 18-year period. The resulting neural network can be evaluated against current datasets (i.e. the last couple of weeks) on either this platform, or using an iPad or Android tablet.

[June 28, 2017] - New image, with Probability Calculator, Time Series Manager (with linked GNUplot graphics), and the Xerion NN-AI (cmd-line mode runs the GNUplot display; the Xerion gui shows Hinton Diagrams of network unit values for the most recent data case). The "plotValues" tcl/tk program shows the boolean training target, and the output of the network's boolean prediction, in the bottom-centre chart. All is integrated using the Fedora/RedHat platform, running on the dedicated AI box, an Intel 32-bit uniprocessor. The Linux utilities DOSemu and Wine ("WINdows Emulator", or officially "Wine Is Not an Emulator") are used: DOSemu supports the Probability Calculator app, while Wine runs the Time Series Manager. Xerion was compiled from the U of Toronto source, with various minor modifications to support a modern (sort of) Linux kernel (Fedora/RedHat Kernel #1 SMP - the kernel is "old" now, but has a few custom bits compiled in). Everything together at last, and running well. Results looking good - both the technology, and the market tone. Note that I modified the GNUplot display of "Actual" vs "Network Forecast" to show the predicted boolean output on the top (green line), with the actual training target on the lower line. This makes it easier to see the most-recent predicted network value, which can be expected to drive one's tactical market efforts. FD: I remain fully invested, long.

June 26th, 2017 run of MarketNet. I had a bug in the MAKECASE program that prevented me from making use of the last record of data - the most recent observation. Fixed the code this morning, re-ran everything, and determined the network generated *two* negative boolean outputs in a row => a stronger negative signal (i.e. it implies a > 1% downshift in price). This morning, the target security traded 106 to 106.58 in the first few minutes of trading, but by noon, the price had decayed over 1%, to the 104.98 level. The results here are generated entirely with data up to Friday, June 23rd 2017, only (i.e. no "post-diction" phenomenon happening here). Note: because this is the ex-dividend day, a price of 106.11 implies no change. The network correctly forecast the >1% drop from the 106.11 level (not just from the Friday close price of 107.38). [FD: I remain long, but slightly regretful :) ] We are not playing "chess" here. This is a real, open-domain process, and these are real numbers. We really have no idea what the underlying price distributions look like. The results are impressive enough that I wanted to post them immediately, as the -0.5330 and then -0.99996 is a clear, unambiguous boolean toggle (as per the cyan screen showing Xerion network output, and the green line on the GNUplot image).

Here is an image of the tanh (hyperbolic tangent) function from GNUplot37, overlaid with the hypertanf sAPL function from the "neuralxr" workspace. This sAPL workspace will accept the MNnet4~1.WTT file of Xerion weights for the MarketNet network, and use dot-product vector multiplication of the weights to "activate" the Xerion-trained network. This will let me "run" the network on the iPad. I wrote one function to load the Xerion weights file into sAPL (format: wt <- readfile fname) and a second function to convert the text into numerics (format: wnet <- procwt wt). Currently, wnet is just a high-precision vector of 1281 32-bit floats. Since I'm using hyperbolic tangent instead of the logistic as my transfer function, I needed to write this tiny transfer function. The tanh function already exists in GNUplot37: you can start GNUplot and just enter "plot tanh(x)" to see this S-curve, which is the mechanism by which machine-intelligence is stored in a neural network. Getting closer to an NN-based, iPad-runnable Augmenter. [Update: I wrote the function at top-left, but then remembered the APL built-in trig functions, and yes, "7oX" gives the hyperbolic tangent of X. The "o" operator is "ALT-o", and when used dyadically (two arguments), it gives access to all the trig functions. With full precision of 18 digits enabled, the built-in tanh gives slightly more precise results.]
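Conceptually, "running" the net amounts to a dot product plus a tanh squash at each layer. Here is a minimal Python sketch of that activation step; the 30-40-1 layer layout with biases is my assumption, chosen because it happens to account for exactly the 1281 weights mentioned (30×40 + 40 + 40×1 + 1 = 1281):

```python
import math

def tanh_net(x, w_ih, b_h, w_ho, b_o):
    """Activate a hypothetical 30-40-1 feed-forward net by dot products,
    with tanh as the transfer function (as in the sAPL sketch).
    w_ih: 40 rows of 30 input weights; b_h: 40 hidden biases;
    w_ho: 40 hidden-to-output weights; b_o: output bias."""
    hidden = [math.tanh(sum(xi * w for xi, w in zip(x, row)) + b)
              for row, b in zip(w_ih, b_h)]
    return math.tanh(sum(h * w for h, w in zip(hidden, w_ho)) + b_o)
```

In sAPL the same computation is an inner product (+.x) per layer followed by the 7oX tanh primitive, applied to slices of the wnet weight vector.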

This screen shot from the Linux AI-box is a quick way to post results - not sophisticated, but clear. Speaking of "quick", I used the "quickProp" method here, which models the error derivatives as independent quadratics. The method tries to jump to the projected minimum of each quadratic. This is one of the minimization methods in Xerion, and it has worked well on my signed boolean data. (See: S. Fahlman, "An Empirical Study of Learning Speed in Back-Propagation Networks", 1988, CMU-CS-88-162, Carnegie Mellon University.) Typically this method uses fixed steps with an epsilon of 1, but I used a line-search here. The error value (f:) is driven down below 300, with a gradient vector length of less than 6. From the plotValues.tcl chart, one can see it improves on the previous result. If this network is this good on a different dataset outside the training example, then we might just have something here. I want to thank Dr. Hinton and everyone at U of Toronto for making Xerion available.
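A sketch of the quickprop idea from Fahlman's paper (illustrative only, not Xerion's actual code): from two successive gradient readings, each weight's error curve is treated as an independent quadratic, and the step jumps to that parabola's projected minimum. On a truly quadratic error surface, one quickprop step lands on the minimum:

```python
def quickprop_demo(grad, w0, eps=0.1, steps=5):
    """Minimize a 1-D error function given its gradient, using the
    quickprop update  dw(t) = dw(t-1) * g(t) / (g(t-1) - g(t)),
    i.e. a jump to the minimum of the quadratic fit through the
    last two gradient measurements."""
    w = w0
    g_prev = grad(w)
    dw = -eps * g_prev           # bootstrap with one plain gradient step
    w += dw
    for _ in range(steps):
        g = grad(w)
        if g == 0 or g_prev == g:
            break                # at (or stuck near) the minimum
        dw = dw * g / (g_prev - g)   # quadratic-minimum jump
        w += dw
        g_prev = g
    return w
```

For the quadratic error (w - 3)^2, whose gradient is 2(w - 3), the first quickprop jump after the bootstrap step lands essentially exactly on w = 3.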

Running Xerion with gui, running backpropagation using conjugate gradient and line-search, with new network with twice the nodes. Error level (F:) down below previous 20 node network in less than 400 evaluations. Looks promising...
[Initial Results: MarketNet was built using signed boolean jump coding. Note that for the graphic (Postscript output, shown using GhostView), I tweaked my plotValues.tcl displayer to shift the actual data up by +3, so it does not obscure the network output forecast. The network is called "MarketNet", and it is not fully trained, as I need to reset the "tcl_precision" value to 17 (from its default of 6). With improved precision, the network trains further, and should become more accurate. What one needs to do is save the weights, and then try the network on a dataset built for a different time period. This will provide an indication of whether I am just training to noise or not.]

Network Evaluation Results - May 18 to July 21, 2017. The results show the network cannot accurately forecast the 4-day-forward boolean impulse value. The coefficient of accuracy is 24% - less than 1/3rd, so actually worse than random. This indicates that there is not sufficient information in the dataset (boolean impulse data for 5 days back, across 6 different price series: SPX, DJIA, BCE, Spot Gold (3pm London fix in US$), Spot Oil (WTI Cushing Hub, US$/bbl) and CM) to make a useful forecast. I had expected the results might at least be close to 40% - maybe even 45% - but such is not the case. One can make money trading securities - but forecasting future price levels, even when the data is boolean-classified as just higher, same or lower, is not possible here. More data, across a longer time period, may improve the network's ability to predict. But this evaluation currently shows the NN-AI has no ability to make accurate predictions of future market direction for the target security.

Field Notes from Lorcalon Farm

APL on iPad & TensorFlow, Xerion & the Helper-AI's

GEMESYS Ltd. is the name of my consulting practice.  We do research and analysis in science and technology, with a view to learning, teaching, and helping.  And we look for special economic situations that work.  GEMESYS Ltd. was established in 1981, and continues to offer research and consulting services to address unique requirements.  We operate from Lorcalon Farm, in Canada.  (The image at right was made using the Laplace partial-differential-equation simulation example from Google's TensorFlow tutorials.)

Why Do Datascience? & Why use AI?

Since the 1990's, I've done data-science related work under the radar, as it were.  I've even built amplifiers and radios to learn about feedback processes.  (Building and tuning an actual, physical device teaches one so much.  The math of it gets into your fingertips...)  I read George Soros's writings on "reflexivity" in the markets (circa 1980's), and I think I am beginning to understand why "technical analysis" actually works.  We used to think it was because it captured the behavioural-economic features of humans (cf. Amos Tversky, Daniel Kahneman, Richard Thaler et al), but now I think there is more there.  If you need to make money using the markets (i.e. to pay your bills), you either go broke, or you end up using some form of technical analysis (or, you become a portfolio manager, take a percentage of the assets, and you don't care what happens, as long as you can keep your clients).  But now there is hard-core datascience, which lets many different ideas be looked at all the time.  Having a good AI helper, with statistically significant results associated with its predictions, can, I suspect, give one an edge, even if much of the data one encounters is mostly wild randomness.  As a lone wolf in private practice, you either have a verified edge, or you are quickly carried out, and fall into the abyss.  And it seems AI can give you an edge.  [Mar. 31, 2017.  Well, I guess it's confirmed:  US-based BlackRock, one of the biggest investment funds on the planet now, with $5.1 trillion in assets, has announced that it will sack a bunch of its human stock-pickers, and replace them with *robots* - the term Wall Street uses for AI-driven investment strategies.  Source: Wall Street Journal article, Mar. 28, 2017.]

As time goes by and markets change, I just keep getting more evidence of how any *model* is going to be successfully gamed by the market.  You don't want a model, you want an old, experienced guy to offer some gentle advice.  Since there is no such guy, a *very* well-trained AI might be the next best thing, perhaps?

Status Log (TensorFlow/Xerion work):

[July 25, 2017] - The picture above shows Network Evaluation results for the May 18 to July 21 period.  The neural-network cannot predict with any useful accuracy - results are basically slightly worse than random.  The little Tcl/Tk evaluation program is provided in the "Code" section.  I believe the NN-AI approach is useful and effective, and what it has shown here is that there is not sufficient information in the data to forecast even the direction of change 4 days hence.  This actually confirms what colleagues and I discovered in a project done for a Government Treasury operation, back in the 1980's.  Reviews and analytic efforts directed at current data-series are of no value in predicting near-future price levels in an active marketplace, and it is not even possible to catch turning-points or the direction of future changes.  It seems it is only by possessing specific, market-moving information, ahead of other market participants, that any "edge" can be obtained.  (Of course, if you can see the order flow come in, and act before those orders hit the market, that is essentially the same thing as acting with prior knowledge.)  What is interesting is that the "null forecast" (i.e. "it will be tomorrow what it has been today") always beats any active attempt to forecast.  I thought this might be different here, but for now, no joy.

Also, doing a crash course on Apache Spark (with side-detours into Scala and Hadoop).  Can't believe this stuff.  Worse than TensorFlow - which looks great, but is runnable only after downloading and installing terabytes of related Java, Python and other such material.  Looked at some OpenText stuff which uses Apache Spark.  The code-bloat here is just off the scale.  Dig deep into the stuff, and you get down to the JDK, SQL and R like everything else.  This is the same gunk that hasn't changed in years.  I've been considering calling this a wrap, shutting down the website, and going back to just making money by some really traditional methods that have always worked for me.  AI and machine learning seem to have a dangerously high bullshyte component (to use Neal S.'s great word from Anathem).  I know AI can work, but it's all about recognition, and hammering away with machine-clusters on great "data-lakes" of unstructured material is not going to make anyone but the regulators and the software merchants any money.  OpenText seems to have the right idea, in that they use Spark to sift thru gobs of crap-data that companies create that can leak PII (personally identifiable information) out into the public space (think SINs, credit-card #'s, etc.), and help stop this leakage so the company does not get f**ked over by new European data privacy laws.  But a lot of the other AI promises look to be nonsense.  The data has to be structured (and clean!) if it is to be of any use.  (That's why AI only really works in games, where reality can be tightly bounded so that Taleb's "ludic fallacy" can be realized.)  But what I have learned from this project is that I need a lot more data before I have any real chance to make accurate forecasts.  And I've also realized that I can code (into booleans) a whole lot more than just price-changes.
If you believe in "efficient markets", then everything should already be in the price - and so price change should be enough to get a good handle on the future.  But all the research shows that markets are not even close to efficient - and it is in the nature of the inefficiencies that the money resides.  BMO just completed a 4 million share repurchase effort, with shares repurchased for cancellation.  Nice move.  Be a nice trade to step in front of, no?  But I only heard about it by reading the newswires after it had been completed.  "Information" is not homogeneous - most info is useless blather and flatulent noise - but some is, or can be, critically useful.  Get that data, code it up as boolean strings, feed it to the NN for training, and your AI might be able to become smart enough to make a difference to your results.

I posted the tiny Tcl/Tk program in the "Code" section that is used to evaluate the boolean table generated by the network (it creates the evaluation table shown above).  It calculates a simple "coefficient of accuracy" by just counting the evaluation training cases where the network got the forecast correct.  It counts anything with an absolute value less than or equal to .8 as a zero.  (The network has to provide an output value below -.8 or above +.8 to have it counted as a minus one or a plus one.  The three possible target values are -1, 0 or +1, so any result in the -.8 to +.8 range gets counted as a zero.)  The coefficient of accuracy is running around 23 to 27 percent, so I conclude the network is just not able to forecast at all.  What is interesting, is that if I forget to load the weights, and run the evaluation on a random network (where the network node weights are all just random values), the coefficient of accuracy jumps to around 65% typically, as most of the forecast target values are zero, and most of the randomly-produced output values fall within the -.8 to +.8 "evaluate as a zero" range.  I think this is absolutely hilarious.  My trained AI only gets it right 1/4 of the time - but the "null forecast" (i.e. nothing really changes - or any change is less than 1%) is correct about 2/3rds of the time!  This result jibes exactly with previous research I did for a government department years ago.  We found the "null forecast" (i.e. it will be in the future, what it is now) *always* beat any forecast provided by professional forecasters and economic soothsayers.  This is actually pretty interesting.  I have a suspicion that there might be something actionable here using Bayesian probabilities, if I could just improve the network forecast to getting it right 40 to 45% of the time - still less than half, which would seem to provide no edge at all.
But if you know that 2/3rds of the time there will be no significant change, then when you do get an indication of an expected price jump, if the costs and payoffs of the bet are sufficiently asymmetric, it still might work over time to make a bit of money.
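The evaluation logic described above is small enough to sketch in Python (a re-statement of the ±0.8 banding rule, not the actual Tcl/Tk program):

```python
def classify(y, band=0.8):
    """Map a raw network output to a signed boolean (trinary) class:
    anything with absolute value <= band counts as 0, outside the
    band it counts as -1 or +1."""
    if y > band:
        return 1
    if y < -band:
        return -1
    return 0

def coefficient_of_accuracy(outputs, targets, band=0.8):
    """Fraction of cases where the thresholded network output matches
    the -1/0/+1 training target."""
    hits = sum(classify(y, band) == t for y, t in zip(outputs, targets))
    return hits / len(targets)
```

Note how this scoring rule explains the random-network result above: an unloaded net mostly emits values inside the -.8 to +.8 band, which all score as zero, and since most targets are zero, the "null forecast" scores around 2/3rds by default.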

[July 24, 2017] - I updated the data, generated the boolean-impulse casefile, ran the neural-net model, and looked at the forecast vs. actuals for the last couple of weeks.  The network just does not predict well.  What does work looks to be serial-autocorrelation strategies - the target trends quite strongly.

[July 20, 2017] - Interestingly, the inability of the trained network to make accurate predictions for the 5-day-ahead point in time is almost certainly due to the fact that there is not sufficient information in each training case to make such a prediction.  In other words, even the direction of change in the near future cannot be known with any accuracy.  This is useful information.  It tells us that if we are to have investment success, we must target time-frames where we can effectively use current information to advantage.  That may well mean time-frames of hours and minutes (we know that works), or months and years (Graham and Dodd show how that can work, too).  Throughout this exercise, I remained in a long position on the target security, which is now trading at 109.47/shr.  Much of the percentage price improvement (which was not caught by the network) is the result of a recognition that a confluence of factors is at work - the improving position of the commodity-driven Cdn dollar (oil and gold cannot and will not stay cheap forever), the slow but inevitable rise in the general level of interest rates (the recent quarter-point increase in the Bank of Canada rate is certainly only the beginning of the process of rate normalization), and the various fundamental indications that the valuation of our target security (against baseline financial-ratio metrics, as well as its peers) remains attractive.  With an attractive payout ratio above 40%, a dividend rate that remains close to 5%, and a long historical record of dividend consistency, it is not difficult for an objective analyst to put a target price of $140 to $150 per share (Cdn$) on the target.

I have a better understanding of why "robot"-selected portfolios are so attractive now to investment professionals.  In the same way that neural-networks can always "see" an image if it is in fact see-able by humans, it is probably true that this technology - when applied to datasets that actually contain sufficient information to make an effective selection, and over a sufficient time-frame - will achieve accurate recognition, and make profitable choices.

What this means in practice, is that I need to lengthen the time-frame significantly, and broaden the scope of the data to include fundamental information on market tone and target financial characteristics.  The 5-day time range is basically all noise - if you train to noise, you cannot get anything meaningful as a prediction.  But if you look out several years, and train your network to select for characteristics that are known to have (and must have) a significant effect on the ultimate target price, then you will almost certainly substantially enhance the network's effectiveness.  I am also pretty sure you can *shrink* the time-frame down to minutes, and basically have the network trade the order flow - and profit from essentially scalping the bid and offer range.  This is how the old floor traders made their livings - a few ticks on each trade, based on their reading of the marketplace.  Many reports suggest, for example, that just the volume-level of shouting in the room was a useful and actionable indicator.  One needs very high-speed tick-by-tick data, and the ability to execute rapidly, to even begin to test these sorts of network-driven strategies (the so-called "flash" trading models), and there is a lot of evidence that *many* groups are already doing this effectively.

From this work, I am now of the opinion that it is only by trading over multi-year time-frames that the average, non-professional investor can significantly profit in modern securities markets.  The very-short-term remains the domain of the very well funded professionals, who have access to substantial capital and advanced-technology linkages, while the multi-year timeframe provides the non-professional investor real opportunity for investment success - if investment selections are made wisely, and monitored carefully.  The intermediate ranges - weeks to months - seem to be characterized by what I term "reactive noise", where occasionally statistical arbitrage is possible, but catching the weeks-to-months intermediate market swings remains a process with a high "noise" component, which makes predictability difficult.  What this means in practice is that if you are swing-trading and trying to catch local ups and downs, you are unlikely to make money over time, and in fact run a high risk of being knocked out of an attractive position at the worst possible time.

Bottom line: The neural-network generated several sell-signals, but I elected to remain fully-invested (for reasons indicated above), as the target security advanced from the 104+ level to the 109+ level where it trades today.  On a 2300-share position, this $5/shr move has generated an $11,500 gain over the evaluation period covered by the experiment.  Should the valuation of the target security move closer to its peers (particularly its US-based peers), then substantial price improvement would seem to be possible.  Given that the company in question has made a significant US acquisition, it is not unreasonable that the market may, over time, assign a valuation to the target that aligns closer to its US peers.

[July 15, 2017] - The current experiment to use boolean delta-jumps as a predictive strategy has not yielded a particularly effective forecasting tool, but it does allow one to characterize the market, based on a particular picture-of-the-world that has prevailed, and as such, it provides a formal instrumentation of the current market situation.  The formalism and methodology are sound, and an enhanced dataset (more than just 6 data series) can be expected to yield better, more fine-grained results.  What I've done here is to develop a working proof-of-concept neural-network-based AI product, which can provide market characterization, based on choices that can be made by each client.  It's possible for a tailored, custom AI product to be quickly designed and implemented, specific to the views of a single client, which would incorporate a client-specific data selection.  As we know, the major investors in New York are already doing this, and I believe the opportunity now exists for smaller firms and individuals to deploy AI methods.  I suspect this may even things up for investors, and that a more level field will make a fairer and more effective market for everyone.

[July 12, 2017] - Formal evaluation of results:  Two networks were trained on 4361 cases, where each case was a 30-element signed boolean vector, derived by looking at price jumps of several different securities and commodities, trained to a price jump in a target security 5 days hence.  The nets are V2 and V5.  On the training data (the 4361 observations), Net_V2 got 3893 out of 4361 cases correct (= 0.892685), and Net_V5 got 3849/4361 => a coefficient of accuracy of 0.882596.  Net_V2 seems to be the best network so far.  (Coefficient of accuracy on training data: 89.3% versus 88.3% for Net_V5.)  On the evaluation cases, so far, the networks are not performing well.  Their results appear to be worse than what could be expected from randomness.  On the data from May 11 to July 11, Net_V2 is posting 9/34 accurate forecasts, and Net_V5 is posting 7/34 accurate forecasts.  (Coefficients of accuracy: 0.2647 for Net_V2, versus 0.2059 for Net_V5.)  I suspect the issue is that the boolean price-jump data being used to train the networks does not contain sufficient information to know what the target price jump will be in a week.  If a linkage could be established, I suspect the network training would have found such a relationship.  But what these results suggest is that knowing the price-jump history for several days back, and across several different price series, is not sufficient to predict a future price jump - even if that future is only 5 days hence.  It suggests we need more data, across a greater number of independent components, if we are to have a better than even chance of predicting future price jumps.  A nice methodology, but no "edge" for now.
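The coefficients above are simple hit-ratios, and laying them out side by side makes the training-versus-evaluation gap plain (figures as reported in these notes):

```python
def coa(hits, cases):
    """Coefficient of accuracy: fraction of cases forecast correctly."""
    return hits / cases

# Training-set accuracy (4361 cases) vs. out-of-sample accuracy (34 cases)
train_v2, train_v5 = coa(3893, 4361), coa(3849, 4361)   # ~0.893, ~0.883
eval_v2, eval_v5 = coa(9, 34), coa(7, 34)               # ~0.265, ~0.206

# The drop from ~0.89 in-sample to ~0.26 out-of-sample is the classic
# signature of a net that has memorized (trained to) noise rather than
# learned a persistent relationship.
```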

[July 8-9, 2017] - Rebuilt Xerion completely on a Linux laptop from source, and kept notes this time.  The original build was experimental, done back in mid-March on the AI box, and I did not keep notes.  You need the varargs.h file, and each of the "configure" files needs to be fixed (they report syntax errors that suggest the tcl.h and uts.h includes are not being found).  The fonts.dir file in the /usr/share/X11/fonts/100dpi directory has to be altered, so the "-adobe-courier-bold...70..." font can be found when the "bp-wish" program, in xerion/tkbp-4.1p2, is run (it brings up the Xerion gui screen).  You edit an existing reference of "-adobe-courier-bold .... 90..." to become "-adobe-courier ...70 ..." (literally change the "90" to a "70").  This does not seem to impair any existing apps, and allows Xerion to pop up its window, assuming you are running an X11 desktop of some kind.  Also, before trying to build the Xerion components, you need to build and install tcl7.3 and its associated tk3.6, the itcl extensions and tclX, all four of which are provided in the U of Toronto's Xerion source tarballs.  Successfully built tcl/tk, and Xerion and its components and associated utilities, on the Fedora Linux laptop.  (See the site's top image, which shows Xerion running on a Linux laptop and training against the initial boolean dataset.)

[July 7, 2017] - Over the last two days, retrained a different network on the same 4361-observation dataset, where each day is a 30-element jump-delta vector from various market prices.  Interestingly, despite having the sumsquare error driven down to roughly the same reported level (318 for the V3 network, versus 311 for the V2 network, from a starting error level of typically around 3000), Network V2 - the production version under active evaluation - appears to do a better job.  The V3 network provides remarkably different results for the May 11 to July 6th test range versus the V2 net.  The original V2 net seems to be much better, in that it seems to be more accurate in its forecasts.  (The first network, V1, was not good - so the original is V2, and the newer trained version is V3.)  A montage of both results from "compareMarketNet" is shown in the last screen display of the "Code" section.

[July 6, 2017] - (Afternoon) Updated the "Code" section to provide the tcl code that creates the example network I have been using and loads the training cases, and to show how the iPad can be used to take the Xerion network weights and structure, and run boolean jump-delta vectors made from market price data, right on the iPad, to get go/no-go trading decision information.  This now provides a working prototype of a neural-network-driven portable AI that can be built and trained using a large amount of real market data, but run on an iPad in real time, to provide immediate, actionable market decision suggestions.

[July 6, 2017] - Experimenting with different back-prop methods (conjugate gradient, delta-bar-delta, momentum descent...) and different step-methods (fixed step with various epsilons, line search, slop search) to see which gives the smallest error.  I'm interested in interpreting the net's output as a gaussian that I can use for result-evaluation, and am still not sure of the best way to do this.  The day-to-day results appear to be good enough to trade with, and it looks like this approach is offering a small, but viable, edge.  This is key.  Oh, also a big tech result: I downloaded newer versions of gtk, gdk, and glib, and compiled and built everything from source, on three older Linux platforms (two laptops and the AI box - need a modern Firefox to find data...).  Then, I downloaded the Firefox that is current for CentOS 6.6, which is Firefox-34.  Once you have Firefox v34 running, you can access modern JSON websites and such, and also jump to version 44, using the Firefox upgrader.  Here is an interesting caveat.  Despite running a "./configure, make, make install" on the glib and gtk+2 sources, the rpm (package manager) was still reporting the old versions, and my binary-only version of Firefox-34 would not load (error was: gtk_widget_set_can_focus symbol not found).  The solution was simple, and I stumbled upon it myself because I am stubborn, and confirmed that the gtk_widget stuff was being compiled.  Just nav to the dir where you ran the gtk compile, and run "ldconfig" to let Firefox find the libraries at runtime.  Modern Linux has dynamic libs (they load at run time), so even using yum to update "xulrunner" and "nspr" did not get the Firefox binaries running.
Oh, and this is critical: compile your new glib first (I used an older glib ver. 2.26.1), and after the "make install" step, run "export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH" and then run "/sbin/ldconfig" to configure the dynamic linker runtime bindings for the graphics stuff; otherwise the ./configure step for the gtk+2 stuff will not run.  Can't remember where I found that - maybe on a StackOverflow post?  The default glib install puts the glib libs in /usr/local/lib (which is probably what you want to do, so as not to degrade your production gnome desktops).  Yes, this is a tad kludgy, but my stuff all just works.

[June 30, 2017] - Results for the network run pre-market are another +1, so we now have three positive boolean ones in a row.  Market tone is weak and soft.  Target price was up, then down, in choppy action.  Network says to buy now - clear, strong signal, three +1's in a row.  See the first image in "Economics-2017" to see the output screen.  I also show the full Hinton Diagram as generated by the Xerion display utility, for the case of June 29, 2017, the most recent observation.  The single white square on top in "Unit: Output" is the +1 boolean target value.  Minus one is a pure black square, and a near-grey square is a zero.  Note the inputs and network output are signed booleans ("trinary data"), but the internal network values can vary between -1 and +1, as shown by the Hinton Diagrams in the middle row of boxes in "Unit: Output".

[June 28, 2017] - Sourced the data, built the boolean table, ran the network.  It toggled... negative to positive output, from -1.0 to .9996895.  So, that means price upshifts?  The price delta of the target is already +0.82 of a dollar by 9:39AM as I write this.  Since I am only using retail-level trading software, I have zero chance of even putting on a position.  The price will probably retrace.  But the methodology seems curiously solid, and with proper software, there might be opportunity here.  [1:30pm update]  Tweaked the AI box, and got everything working there, including the Probability Calculator and other modules, which can be driven from .PTB format data written by TSM.  Linkages between packages are file-based, but *everything* can now run on Linux.  Currently using an older Fedora kernel, but CentOS 6.6 and 7.x look like they work ok with everything.  Wine compiled and installed fine on all machines, and TSM and MAKECASE run fine on my CentOS 6.6 testbed.  Updated the top picture, showing the AI box running Xerion with Hinton Diagrams of unit values for the June 27, 2017 datacase, TSM and a data-driven OHLC price-chart of the target, cmd-line Xerion "bp->" running the GNUplot display of NN actual vs. predicted, and the Probability Calculator (running in a DOSbox), which also provides a risk-driven recommendation for position size.  The same module which runs the Prob. Calc. also calculates Hurst exponents, and a series of moving-average market characterizations and related graphic displays.  Having it all running on one platform makes data-management much easier and allows results to be obtained faster.

[June 26, 2017] - Determined I had a bug in the last-date processing of raw price numbers into boolean data, and built a fix.  This gave me a proper result for Friday - one more data record on the "tcasetab.txt" boolean exampleSet file.  The correct network output for the most recent data (Friday, June 23rd) was -0.999966 (Thursday's was -0.533085).  I expected this only to flag the price shift from the target going ex-dividend, but it seems the network did better, and caught a serious >1% downtick in market price, from the 106.11 level to the 104.90 range, by noon on Monday, June 26th.  This could all just be randomness, of course, but this technical approach is showing surprising, unambiguous ability to forecast future market price direction.

[June 25, 2017] - Latest results, with data to June 23, 2017.  The network under test is a trivially small neural-network model, but the results are interesting.  See the "Economics-2017" section.  The target security goes ex-dividend tomorrow by 1.27, so we know the open price will be 106.11, ceteris paribus, and will be down enough to trip a boolean in the jump-delta MAKECASE table.
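The "trip a boolean" logic can be sketched with the numbers from this entry.  Assumptions: the function name is hypothetical, and the 1% filter threshold is an illustrative choice (the actual MAKECASE filter parameter isn't stated here); the prior close is implied by dividend 1.27 + expected open 106.11.

```python
def jump_delta_bool(prev_close, price, filter_pct=1.0):
    """Signed-boolean ('trinary') jump-delta: +1 if the price is up more
    than filter_pct percent vs the previous close, -1 if down more, else 0."""
    pct_change = (price - prev_close) / prev_close * 100.0
    if pct_change > filter_pct:
        return 1
    if pct_change < -filter_pct:
        return -1
    return 0

# Ex-dividend example: implied prior close 106.11 + 1.27 = 107.38, so the
# expected open of 106.11 is a drop of about 1.18% -- enough to trip a -1
# under a 1% filter.
prev_close = 106.11 + 1.27
print(jump_delta_bool(prev_close, 106.11))   # -1
```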

[June 22, 2017] - The NN-AI (Neural-Network based Artificial Intelligence) device described here looks as though it might be useful.  This page has become too long, so I put some of today's preliminary results in the "Economics-2017" section.

[June 19, 2017] - Put the MAKECASEBOOL program inside the Linux version of my Time Series Manager (with its nice Windows GUI), and built a bunch of little modal window routines to make it work as a full graphical user interface (GUI).  This makes it easy and quick to generate a boolean jump-delta table and hand it over to Xerion, to have the network run against it.  Also built (yet again!) an entire new data-sourcing subsystem, to merge/load yet another completely different data source into the TSM database, so that my time-series data can stay current.  Information suppliers seem to change their formats every few months now.  The Lynx browser is used to pull data from various internet sources, while .csv-format files can be downloaded from sources such as the St. Louis Federal Reserve.  Putting everything into a stable data-management product (my custom-built TSM, in my case), and ensuring that the data is accurate, is the first and most critical step in any data-driven research and analysis exercise.  Many research efforts and analytic toolsets use SQL variants to maintain time-indexed data, but SQL does not lend itself well to maintaining and manipulating time-series data.  My TSM lets time-indexed tables and vectors be manipulated as single entities, and so facilitates the training-dataset construction that NN-based machine learning requires as a starting point.

[June 17, 2017] - Got the Lynx browser running - WITH OpenSSL (needed for "https:" pages, of course) - on the Linux boxes *AND* natively on the iPad (?! yes, really.  Ran the configure and the make right on the iPad, and built a working, SSL-enabled web browser right on the tablet itself, with no awful glop-bucket of Eclipse or Apple dev-goo with timebombs in it).  Lynx works pretty well, and will be useful (see the top-line "GNUgcc & Lynx Browser" section for details).  Also re-wrote all the data-get routines in Time Series Manager, so I can get data again, and slotted in the stub for the boolean delta-jump table-builder, called MAKECASE.  I just need to build into TSM some nice modal boxes to pick up the boolean table-build parms, and I have a cobbled-together, hacketty-hacked (but real, and actionable!) prototype AI product.

[June 15, 2017] - Massive power failure at the lab.  A large tree fell on our powerline, snapped a power pole as the wires broke, and we went dark.  Within an hour, I had our electrics guy here with a new pole, and a utility crew removing and replacing the blown transformer by the garage.  (This actually happened on Monday, the 11th.)  All recovered within one day, but it got me thinking about disasters.  Here is another: reading about Intel ME, and the "Silent Bob is Silent" exploit.  Bad stuff, already in the wild and being used, it appears.  Two of my boxes have Intel ME, and today I shut down, and then re-booted, a *powered-off* machine from the hacked iPad, via only WiFi access, just using the Safari browser on the iPad to log in to port 16992 on a Win7 box.  The Intel AMT software is firmware, running on an ARC chip on the motherboard, and runs completely separate from whatever software you have put as your O/S on the box.  Folks have disassembled Intel AMT, and the "Silent Bob" exploit lets you log in and access the Intel-ME webserver even if your machine is plugged in but powered off - and without entering the admin password.  The IntelME thing can be used as a packet-sniffer, and to access memory on the Intel box while it is running.  It's basically the "Clipper Chip", an ugly idea shot down over 20 years ago.  Read the tech details about it at this URL:   There are experimental "ME-Disable" routines, which flash part of the firmware with hex FF but try to keep the BIOS BUP (boot-up) stuff; they can brick the Intel main-board in some cases.  It's an ancient paradigm: we must take risks to be safe... a lot like investment activity.  The training target swung from 104.70 to close at 106 even, as the Cdn$ showed some firmness.  With ex-div approaching, and summer vacations being taken, things will likely get a bit twitchy.

[June 11-14, 2017] - Lots of volatility again.  But also done some cool techie stuff: installed the latest stable version of "Wine" for Linux on the Linux boxes.  The "Wine" (WINdows Emulation) program suite lets Windows programs run on Linux machines.  A big result, as MAKECASE is written for Windows, as is the TimeSeries price database.  Both are now converted to run on Linux - along with WGNUplot, so my whole data-management app can now run on Linux.  Also downloaded the "openssl-devel" stuff, and rebuilt the Lynx programs on Fedora Linux and CentOS to use SSL (secure sockets layer), so Lynx can run with "https:" access.  This was critical, as most financial datasites are SSL (ie. "https:") now, and Lynx is the text-mode browser that is used to pull in the data.  Seeing all my old code and graphics from Windows run on the Linux boxes is quite surreal, as it all works well.  I used the stable-release "Wine" source from:   Note, the SHA256 and MD5 checksum hashes for the wine-2.0.1.tar.xz file are in the photo near the page bottom.  If you have Windows code you want to run on Linux, this looks snappier than building a virtual Windows box.  Image of my Time Series Manager (TSM) application with a 12,362-row by 6-col. series (spot gold prices, 1968 to June 9, 2017), with a linear chart and least-squares regression line, via GNUplot at bottom.  The TSM product avoids spreadsheet stuff, and lets data series be manipulated easily and directly as single tensors.  The last image shows Windows .EXE's running directly on a CentOS 6.6 Linux kernel, using Wine 2.0.1.

[June 9, 2017 - Friday] - The training target is up over 2% today, providing initial positive results for the real-time experiment this development project has become.  (The training target, and other financial equities, are basically in a "run" mode.  Attempts to add to my existing position would now require bids roughly 3% above where the initial +1 sequence began appearing in the current jump-delta example set - the May 11th to June 7th "TCASETAB.TXT" file of booleans, which the trained NN-AI has been interpreting.)  It is quite possible that this favourable outcome is due to random chance.  (The training target is now up over 2.20%, just as I typed this note.)  I suspect that the semi-strange market behaviour we now typically observe (curiously uneven patterns of volatility - no volatility for long stretches, followed by rapid spikes and retracements) is due to the widespread use of AI and other algorithmic methods to augment trading and investment activity.  We may still be being fooled by randomness, but we are also much less randomly fooled, it would appear.  I have a strange sense that this modern market may exceed the excesses we observed during the 1920 to 1935 period.  If this is true, then a DJIA in the 35,000 to 40,000 range within 3 to 5 years is not at all unreasonable or unlikely.  Rising rates will be associated with rising returns on capital, as is often observed in the historical record.  And the AI tools - as they augment ability - will also likely enhance the risk-preference profile of most participants.  The equity market may be the mechanism that puts more income into the wallets of consumers, so that consumption and investment can be given the demand-push that many folks think it needs.  What is curious, of course, is the low rate of indicated inflation.  But I think I know the reason for that also, and discussion of that phenomenon is well beyond the scope of this observational comment...

[June 8, 2017] - Re-ran the NN-AI (Neural-Network AI) program with two more days of data.  Enhanced the MAKECASE and MAKECASEBOOL utilities to allow the TCASETAB.TXT file of boolean jump-delta vectors to be more easily generated.  Market action suggests *many* other participants are already actively using AI methods, which in turn suggests this methodology probably needs to be in everyone's toolbox.  Although a bit technically complex to pull together, if there is some predictive ability, it may be useful.  Certainly, the NN-AI approach is probably the best tool for trying to catch a turning point.  I recall a formal exercise, carried out by a Ministry of Treasury - in which I did the computer programming - that failed to find *any* method that could successfully indicate upcoming interest-rate moves (and subsequent changes in bond prices).  We literally tried *all* known methods, and they were all ineffective at even catching major turns before they happened, much less actually predicting anything.  But this was before NN-AI based methods.  If the data is prepared properly, it appears there might be some effectiveness to the NN-AI approach described here.  (FD: I was not filled yesterday in my order.  Today, the target has advanced 0.60% as I write this.  This approach looks promising.)  The first image is the screen-display for today's forecast: orange screen, top right, with boolean results for the last 4 days, all +1.  As a real-time experiment, this suggests a long position in the target is indicated.

[June 7, 2017] - Fixed a bug in the MAKECASE program, which was not handling the end-of-data construction correctly, simply branching away when it could not create the target (which is 5 days forward).  Fixed the program to provide valid training-case data, with -999 as the indicator that the training target could not be constructed.  This lets me run MAKECASE for a small subset of data (ie. the last 20 or 30 days), and produce correct boolean jump-delta case vectors right up to the end of data, despite not having a training target.  Obviously, this is needed in order to run "compareMarketNet" and see what values the network generates, as these most recent values have the most useful predictive power.  The network still says an uptrend is predicted, as do the previous curve-fitting programs.  (Full disclosure: I put a small bid in just off the market, for a small increment to the existing position.)  [Update, 9:40 pm EDT: I did not get filled, which is traditionally a positive sign.  Almost always, if I get filled on a stat-arb stink bid, I regret it.  Today, not being filled suggests the NN-model might be working.]
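The end-of-data fix can be sketched as follows.  The 5-day-forward target and the -999 sentinel come from the entry; the function name, the case layout, and the 5-day lookback are illustrative assumptions (the real MAKECASE record format isn't shown).

```python
TARGET_MISSING = -999   # sentinel: training target could not be constructed

def build_cases(prices, lookback=5, horizon=5):
    """Build one case per day: `lookback` trailing prices as inputs, and the
    price `horizon` days forward as the target.  Near the end of the data the
    forward target does not exist yet; instead of skipping those rows (the
    original bug), emit them with the -999 sentinel so the most recent --
    and most predictively useful -- cases are still produced."""
    cases = []
    for t in range(lookback - 1, len(prices)):
        inputs = prices[t - lookback + 1 : t + 1]
        if t + horizon < len(prices):
            target = prices[t + horizon]
        else:
            target = TARGET_MISSING       # cannot look 5 days ahead yet
        cases.append((inputs, target))
    return cases
```

With this shape, the last `horizon` cases carry the sentinel: exactly the rows one feeds to "compareMarketNet" to read off the network's view of the most recent days.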

[June 6, 2017] - The top first image shows the most recent results.  I spent time updating data to June 5 (the previous day), and ran MAKECASE from mid-May to the present, to generate the current dataset to give to the network.  Specifically, here is the process to have the neural network evaluate data (start bp_wish, ie. Xerion):

How to Restart & Reload Weights & run a Xerion Network Against New Data...

BashShell > bp_wish              (start bp_wish [Xerion] from the command-line shell)
> source                         (this program just sets up the network, defines the
                                  neural-network structure, loads the "tcasetab.txt"
                                  training-case data into the variable MNTraining, and
                                  sets the "exampleSet" variable to the string "MNTraining")
> set precision 17
> set tcl_precision 17           (tcl_precision has to be set to avoid losing info)
> source compareMarketNet.tcl    (check results: "Actual vs Predicted" ability)
> source plotValuesSML.tcl       (source the smaller "plotValues" tcl program)
> MNTraining size                (check that the exampleSet training loaded ok, 12 obs.)
> bp_groupType MarketNet.Output  (confirm nodes are in the correct configuration)

> uts_presentExample MarketNet MNTraining 0    (present the first example case)
> uts_activateNet MarketNet                    (activate, ie. "run", the net)
> compareMarketNet               (attempt to compare actual vs predicted. No good...
                                  forgot to load the network weights!)

[ the results are random ]

> uts_loadWeights MarketNet MNnet40_v2.wtb     (load the highest-precision weights
                                                from the binary-format file)
> compareMarketNet               (this time, when we run this, we get sane results)

[ the results as shown on the screens in the first image ]

> plotValues MarketNet MNTraining    (creates the Actual vs Predicted chart (see screen);
                                      note that this "plotValues" is from the
                                      "plotValuesSML.tcl" program, sourced above)
Hit Return to quit...

The results suggest an uptrend in the target value.


[June 4, 2017] - Light, clarity, perspective and focus - what we seek when dealing with complex situations where knowledge is obfuscated and obscured.  To see the full panorama clearly is a luxury we do not always have.  Should we try to develop one process, which is slightly faulty but can operate successfully in most situations, or is it better to devise a more complex mechanism, which can adapt rapidly to a variety of situations, but is more likely to be fooled by crafted countermeasures?  I spent the weekend at the Lake, mulling over these design questions...

[May 30, 2017] - Developed the sAPL functions "Estab^marketnet" and "Actnet2", and after some headbanging, got the numbers right(!).  Really quite a result.  It is doable.  The AI-Augmenter is doable.  You can build a simple (but sufficiently complex to solve a real-world task) neural network on a Linux desktop box using Xerion, then take the weights file and establish the same network structure in sAPL, activate the network, and get the same results as Xerion gives.  The function "Estab^marketnet" reads the weights file and establishes the network structure, and the fn "Actnet2" runs the network (against the training cases in var Example).  As long as I remember to switch the default node transfer-activation function from the logistic equation to the hyperbolic tangent(!), I get the same numbers, running the net on the iPad under sAPL - all in less than a 400K workspace.  It's primitive - but it works.  It also offers the possibility of designing and developing a toolset that is unique to each researcher.  No information need be stored or maintained on any internet server.  If you use this AI methodscape, you can retain and ensure local operational integrity, regardless of what happens in the "cloud".  (Ever seen a thunderstorm up real close?  That is what the future holds for us all, I suspect.  You don't want to be dependent on an internet connection for your machine intelligence.  A little wee package can still have a useful little brain.  Just watch a mosquito.)
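Conceptually, what "Estab^marketnet" plus "Actnet2" do is a plain fully connected forward pass with tanh units.  Here is a minimal Python sketch of that idea, under stated assumptions: the function name and the tiny 2-unit shapes in the example are illustrative (the real MarketNet is larger), and the real weight-file parsing is not shown.

```python
import math

def activate(case, w_hidden, b_hidden, w_out, b_out):
    """Forward pass for a simple fully connected net like MarketNet:
    input -> hidden layer (tanh) -> single output unit (tanh).
    w_hidden[j] is the incoming-weight vector of hidden unit j; w_out has
    one weight per hidden unit.  The key detail from the entry: the transfer
    function must be the hyperbolic tangent, not the logistic, or the
    numbers will not match what Xerion produces."""
    hidden = [math.tanh(sum(w * x for w, x in zip(w_hidden[j], case)) + b_hidden[j])
              for j in range(len(w_hidden))]
    return math.tanh(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

# Toy 2-input, 2-hidden, 1-output net (weights made up for illustration):
out = activate([10.0, 10.0],
               [[1.0, 0.0], [0.0, 1.0]],   # hidden weights
               [0.0, 0.0],                 # hidden biases
               [1.0, 1.0], 0.0)            # output weights, bias
print(out)    # close to +1: tanh saturates toward the trinary extremes
```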

[May 29, 2017] - Developed the sAPL functions "readfile" and "procwt" to read Xerion's .WTT file (the network weights) into sAPL.  Also wrote "tanh" to provide a hyperbolic-tangent transfer function, so I can activate (ie. "run") the trained network on the iPad.  Put the APL code in the "Code" section, for those who might be interested.

[May 26, 2017] - Included more iPad examples of what visualization graphics might look like for the AI-Augmenter, as well as a bit of background info on the attributes a network's training target should have.  Note: the full source code for Xerion is available at:  and the documentation is at:   You want to use the Xerion 4.1 version.  The Xor2 network is trivial, but is a better "Hello World" exercise than the OCR digit-recognition stuff TensorFlow suggests.  Note that the first url (the ftp.cs.toronto site) also has all the Tcl/Tk stuff, plus the Tcl extras, that you need.  I may try to pull all my modified code together and put it on the Github account.  Xerion runs under X-Windows, and seems to work fine under Fedora's Gnome desktop.  This is older code, but it is not burdened with a complex sack of dependencies (beyond the usual Linux stuff, of course).

[May 25, 2017] - Site cleanup - re-organized the topline stuff, put the economics images into Econ-2017, last year's market forecast ("Sept 2016 - Why the Stock Market May Move Higher...") into Econ-2016, and the "APL on iPad" details into their own section.  If the new signed-boolean stuff has forward accuracy, I can create a preliminary version of the "AI-Helper/Augmenter" on the iPad, using sAPL.

[May 24, 2017 (pm)] - Re-ran with the "quickProp" method, developed by S. Fahlman (see notes on the picture).  Runs better (smaller error), faster (less than 19,000 evaluations), and fits better.  (Actually, the fit is surprisingly good.)  You can see I saved the network weights as both binary and text values ("uts_dumpWeights MarketNet MNnet40_v2.wtb" and "uts_saveWeights MarketNet MNnet_v2.wtt").  The website is bloated now, and I have to re-organize this page (I am getting red-flag warnings telling me the site will load too slowly).  Apologies if it loads like a slug.  But the "quickProp" result on the signed-boolean data, using a line-search instead of the typical fixed step (epsilon of 1), shows surprisingly good correspondence between the training data and the network forecast.  I wanted to get this posted, so people can see what is possible.  For me, this is basically a "Hello World!" exercise.  It is a simple network (30 input nodes, 40 hidden, one output), but even a simple structure like this can yield effective, actionable information.
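For readers unfamiliar with quickProp: Fahlman's method treats the error surface along each weight as a parabola fitted through the current and previous gradients, and jumps toward the parabola's minimum.  This is a sketch of the published update rule, not Xerion's implementation (which isn't shown); the epsilon and mu defaults are conventional illustrative choices.

```python
def quickprop_step(grad, prev_grad, prev_step, epsilon=0.35, mu=1.75):
    """One quickProp update for a single weight.  grad / prev_grad are the
    error derivatives dE/dw at the current and previous iterations;
    prev_step is the last weight change.  The parabola fit gives
        step = grad / (prev_grad - grad) * prev_step,
    clipped to at most mu * |prev_step| (the 'maximum growth factor').
    With no usable previous step, fall back to plain gradient descent."""
    if prev_step == 0.0 or prev_grad == grad:
        return -epsilon * grad                      # plain gradient-descent step
    step = grad / (prev_grad - grad) * prev_step    # jump toward parabola minimum
    limit = mu * abs(prev_step)
    return max(-limit, min(limit, step))            # clip runaway steps

# On a truly quadratic error (E = w^2, dE/dw = 2w) the parabola fit is exact:
# from w = 0.3 (grad 0.6), with previous grad 2.0 and previous step -0.7,
# the quickprop step is -0.3, landing exactly on the minimum at w = 0.
print(quickprop_step(0.6, 2.0, -0.7))
```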

[May 24, 2017] - Happy Birthday, Queen Victoria!  Re-designed the network, now running with twice the hidden nodes.  You can see the Xerion code I use to create and define the network in the left-side window, cyan-coloured display, in the new pic.  Switched to using the Xerion GUI version, which gives 1-button operation for some tasks.  Running training now.  Total network error falls quickly, and the ability of the net to match the input target looks better.  The 32-bit Intel box that Fedora+Gnome is running on works fine for this.  The tcl/tk interpreter is calling C programs (for the Minimizer), for the conjugate-gradient evaluation and for the line-search.  It ticks along reasonably snappily.  Equity markets in Canada are choppy - analysts were not impressed by BMO earnings this AM.  (How much more money do the Cdn banks have to make, to impress people?  They are each earning at least $1 billion per *quarter*, and BMO just raised its dividend to 0.90/shr.  Fat profits and almost a 50% payout rate.  This is not good enough for you guys?  BMO fell $3.00/shr in the AM.)  Crazy times.  "Money for Nothin', and Your Sticks for Free!", like Mark Knopfler and Snoopy used to say...

[May 21 afternoon, 2017] - Results...  Looks good.  This is a bit of a black art, it appears.  Using conjugate gradient, the training is faster.  But you want to use a line-search, rather than just moving a fixed epsilon in the steepest direction, because directions can change a lot.  Eventually, though, the line-search fails, and one can go no further.  But then you can switch the minimizer to direction "steepest" and a very small fixed step, epsilon (0.001 or 0.0001), and just creep along the surface, like a blind man in the dark.  Not sure if this will really improve training, but I am still running with the overlapped data, each case only one day ahead, but with a 5-day lookback for each series.  A good NN should be able to train to pure noise if you let it run long enough, so the early line-search failures after only a few thousand iterations led me to suspect inconsistent data.  But perhaps the network can deal with the rolling overlaps.  The length of the gradient vector, |g|, is just hovering above 1, and training is continuing on the AI box, an old 32-bit Pentium running Fedora.  The screenshot above, showing the last 360 "Actual vs. Predicted" cases for my boolean jump-delta dataset, was generated just by imaging the Gnome X-Windows display screen with a little Samsung Tab-3 (SM-T310) running Android 4.4.2 (the old Dalvik VM).  Android 6.01 on a Tab-A runs better, and battery life is vastly better, but the old Tab-3 running 4.4.2 is a fine piece also.

[May 20-21, 2017] - Update: Got plotValues.tcl working.  Built a trained network.  Shows Actual vs. "Predicted" booleans (see picture above).  I was not setting "tcl_precision" to the max value of 17 (the default was 6).  Better training results now.  So, I have a trainable dataset.  My Oxford Dictionary defines "naive" as "artless, amusingly simple".  Probably right.  In my naivety, I had thought I could use raw price data as input (despite having scaled the Dmark data, years back, in my first trials with this technology).  Wrong.  Your input data has to be between zero and one (if using logistic activation functions), or (I hope) between -1 and 1 if using hyperbolic-tangent activation functions.  My attempts to train on raw price data, using an exponential transfer function on the final output node, failed.  It just doesn't work: the whole dataset would train to one value across all cases.  So, I had an idea.  I modified the MAKECASE function to create signed boolean vectors, where -1 is down significantly, 0 is no significant change, and +1 is up significantly.  It runs with a filter-parameter that defines what "significantly" is - eg. 1%, 2%, etc.  Xerion lets me define the transfer (ie. activation) function as TANH instead of the default LOGISTIC.  I tried this for both the Hidden and Output groups.  The network now outputs a result between -1 and +1 for each case.  I used "uts_groupType <netname>.Output {OUTPUT TANH SUMSQUARE}" to config the final output node, and built a training-case set as signed booleans.  (Xerion also allows "CROSSENTROPY" instead of SUMSQUARE, and also lets me create a cost model.)  The network now trains to a single signed boolean (trinary) output.  Converted MAKECASE into MAKECASEBOOL, and wrote the TRANSFORMBOOL fn to convert raw prices into a table of signed booleans.  This dataset can be trained, and looks promising.  But what I discovered, after only a few thousand iterations (line search, steepest, very small fixed epsilon), is that I cannot train this data very well.  I cannot even get the sign consistently right before the "function is wobbly" message appears.  Now this is interesting, as it indicates the data is perhaps inconsistent.  (Using scaled price data, you can train right down to the noise, if you run your back-prop long enough.)  So I thought about it, and realized I am rolling ahead 1 day, and then taking the previous 5 days of historical data, to create each training case for the Xerion "exampleSet".  In Edgar Peters' Chaos books, a similar problem was encountered with re-scaled range analysis (Hurst exponents).
You don't want overlap in the data, as it blurs the trials, and the overlap messes up the statistical property of independent, exclusive trials that I am pretty sure one needs.  If I am looking back x days in each series, I probably need to roll forward x days for each training sample.  I will try this idea.  I've tried several different network structures.  Just checked the AI box: training this time looks better.  I get long runs of several months where the signs are at least right.  Much further work is needed - but I suspect now that this approach has merit.  Typically, markets are *almost* random - but often exhibit periods of non-random behaviour, for various time periods, when serious money can be made just by taking a position and then doing nothing.  Jesse Livermore (pseudonym Larry Livingston) was very clear in "Reminiscences of a Stock Operator" that he made the most money by "just sitting".  This seems to have worked for Warren Buffett as well.  I had a CP/M Z-80 when Bill Gates was starting Microsoft, and my first serious app for my new IBM PC was written in MASM assembler.  But for some reason, I never bought Microsoft stock (too expensive?), despite telling folks that Mr. Gates would probably sell MS-DOS to every literate person on Earth.  (I did not foresee Windows.  Missed the class on the "Lilith" box at school.  If curious:  )  Buy-and-hold can work pretty well.  The best trick is to start when you are really young.  Have a good portfolio while you are still in your 20's.  You don't need "artificial" intelligence for that.  Just don't be unwise.  Anyway, this particular AI approach looks like it can perhaps identify (ie. "characterize") the current market nature, and suggest when one might try to establish a position.  You might be able to use the old time-series serial-autocorrelation stuff we learned in economics school to achieve similar results.
It works not badly for the bond markets (they have a high degree of serial autocorrelation), but I could never get any useful results for stocks, and it was dangerous as hell for commodities, given their characteristic extreme reversals.  With commodities, you can make money for *years*, and then lose it all in a couple of ugly weeks when chaotic phase jumps happen (viz. the forward markets for crude oil, for example).  Even if you are right, you can still get killed if you act too soon.  Short oil at 120, knowing it is stupidly over-priced (based on cost of production), only to be stopped out at 130.  Do it again, and lose all your money on the final run to 140/bbl, before the massive reversal begins.  The best training for commodity markets is Vegas and Monte Carlo, as your key objective is to participate without suffering "gambler's ruin".  (But that is a different model...)
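The overlap issue described above - rolling forward one day while looking back five, so consecutive cases share four of five observations - versus rolling forward a full lookback so the trials are independent, can be sketched as follows.  The function name is illustrative.

```python
def window_cases(series, lookback=5, overlapping=True):
    """Build lookback windows over a series.  With overlapping=True, each
    case rolls forward one day, so consecutive cases share lookback-1
    observations (the setup suspected of confusing training).  With
    overlapping=False, each case rolls forward `lookback` days, giving
    independent, non-overlapping trials."""
    step = 1 if overlapping else lookback
    return [series[i : i + lookback]
            for i in range(0, len(series) - lookback + 1, step)]

days = list(range(10))
print(len(window_cases(days)))                  # 6 overlapping windows
print(window_cases(days, overlapping=False))    # [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
```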

[May 17-18, 2017] - All-nighters, days at Starbucks with the laptop...  May is here, the apple blossoms are out, and I am here, writing this...  The market is providing lots of thrills and chills - like an old Lou Reed song.  I built MAKECASE to construct the training cases, and have been trying to train down to a t0+4 price-point on a specific series, from a sequence of segmented series.  These combine cross-sectional and time-series elements (basically 5 days of history, across several different series), reduced to a vector for a specific time point (one day).  I am now *certain* this process is driving markets in many areas.  This is dangerous, but mine is not to reason why.  It is difficult to train to a price target (I'm trying an "exponential" transfer function for the network output - it looks like a stupid idea, but I wanted to try it).  I want to avoid working with "scaled" data, as I just find it annoying to use in realtime.  I have a simple network defined in Xerion (30 input units, 20 hidden, one output), and I can run a few thousand iterations before the line-search fails.  But it does not want to work with un-scaled data.  I have only 4360 training cases - tiny by modern standards - this is really almost a "back of the envelope" exercise - but I have found that simple stuff is actually pretty robust.  (If you are bulletproof, you don't have to drive fast, right?)

Anyway, I wanted to see if I could use the exponential transfer function to just train to a future price, but it does not work well.  The exponential transfer function is typically used for "softmax" training (training to a 0 or 1), and also with "cross entropy" minimization, instead of minimizing the sum of the squared errors.  These options are configurable in Xerion.  My network is called "MarketNet", and one can use the command "uts_show {uts_net[0]}" to view the details.  In bp_sh (the back-propagation shell), you have all these command options (eg. to randomize the net, "uts_randomizeNet MarketNet" will populate the network with random values before beginning training).  You select the minimizer, give it a short name, config the search methods and step sizes or types (the epsilon), and you can run training.  I wrote a trivial .tcl function, which can be sourced, to view the "target" from the training cases versus the "output" of the network.  In bp_sh (the Xerion/tcl cmd shell), you can then enter "compareMarketNet", and get a quick picture of how well the current training attempt has worked.  I'll post some code and examples somewhere later, once I get this working right.

For the old stuff I did years back, I scaled the data between zero and one.  But you have to unscale it to use it, of course.  Then I had this idea: you really want probabilities anyway, so I will modify my MAKECASE program to generate signed-boolean values: 0 => the forward market value does not change much either way, -1 => the forward market is down significantly, and 1 => the forward market is up significantly.  Then the network doing this "softmax"-style training should basically give me a probability estimate of what is likely to happen to the specific price series I am using as my training target.  The trick looks to be using a hyperbolic-tangent activation function (values between -1 and +1), although the exponential (values between 0 and infinity) is what is typically recommended for softmax-type training.
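The reason tanh suits these signed-boolean targets can be seen from the output ranges alone: the logistic function can approach 0 but can never reach a -1 target, while tanh spans (-1, +1).  A tiny sketch (the specific input value is just for illustration):

```python
import math

def logistic(x):
    """Standard logistic activation: output in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# A strongly negative net input into the output node:
x = -6.0
print(round(logistic(x), 4))    # ~0.0025 -- bounded below by 0; a -1 target
                                #   is unreachable, so training stalls
print(round(math.tanh(x), 4))   # ~-1.0   -- can match a -1 trinary target
```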

Oh, a little note on MAKECASE.  What a pain!  Initially, one thinks, "oh, just build a big matrix, and slice it row-wise" to get the series day-segments.  But of course, all the series have *different* holidays and other off-days, so they don't line up.  MAKECASE has to select one series (in my case, I use the SPX), and then conform all observations to those active-day values.  The logic requires that, for any given day, you look back a specified number of days, and a collection of these day-segments forms your training case for that day, along with the target you are training to.  It turns out that is tricky, but doable.  You have to process each series carefully, and check for missing data and such.  What is interesting about this approach is that it should obviously scale, and be applicable in other areas.  One uses Hurst exponents (re-scaled range analysis) to determine if the data is trending, random, or mean-reverting.  It's surprising how many Hurst exponents are right around 0.5 now (it's on the Bloomberg, and has been for many years).  But just because a series looks pure-random with respect to itself does not mean that its cross-elasticity is not a factor with respect to other data vectors.  (The danger, of course, is the illusion of linkage when none is really present.  But the flip-side is worse, no?  You have a pretty clear linkage, and you miss it, leaving all the money to be hoovered up by flash traders.)
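The conform-to-one-calendar step can be sketched as follows.  Assumptions: the function name is illustrative, and the carry-last-value-forward policy for a market's own holidays is one reasonable choice, not necessarily the one MAKECASE uses (the entry only says each series must be processed carefully and checked for missing data).

```python
def conform_to_reference(ref_dates, series):
    """Conform a series (a dict of date -> price) to the active days of a
    reference series (the entry uses the SPX).  On a reference day where this
    series has no observation (a holiday in its own market), carry the last
    available value forward; days before its first observation stay missing."""
    out, last = {}, None
    for d in ref_dates:              # ref_dates must be in ascending order
        if d in series:
            last = series[d]
        if last is not None:
            out[d] = last
    return out

# Hypothetical example: a non-US series closed on two SPX trading days.
spx_days = ["06-19", "06-20", "06-21", "06-22"]
other = {"06-19": 101.2, "06-21": 101.9}
print(conform_to_reference(spx_days, other))
# {'06-19': 101.2, '06-20': 101.2, '06-21': 101.9, '06-22': 101.9}
```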

[May 11, 2017] - Still messing about with what to train to.  I don't want to just forecast; I want a more subtle picture of the future, where the AI can suggest the nature of conditions.  I am thinking I probably need to train to a generated boolean vector which can be interpreted in some sort of quasi-probabilistic way.  Playing around with ideas in APL...

[May 7-8, 2017] - Enough data-cleaning to have a simple prototype soon, I hope - by tomorrow or the next day.  I read an Isaac Asimov short story when I was very young, about a group of scientists working on force-field technology who started having mental breakdowns.  One scientist suggested that humans were just lab test creatures, and that in the same way we ring-fence dangerous bacteria cultures with a circle of antibiotic (penicillin), humans were ring-fenced by those running the experiment, and the problems the team was facing were due to the potential effectiveness of the force-field technology they were working on.  The technology would allow protection from nuclear weapons.  The lab-rats-in-an-experiment idea came from the lead scientist on the project, and his "psychological penicillin ring" theory was accepted to keep this key man working, despite his delusional state.  It was a great story, because it contained a unique theory of evolutionary human development that linked technological progress with progressive social jumps.  I searched thru my old books and found the story, and the paperback cover is shown above in the ISS HDEV picture.  It was "Breeds There a Man...?", first published in 1951 in Astounding Science Fiction (now Analog).  Sometimes, I feel like similar things are occurring on this AI project.  I am beset by curious events which constantly prevent me from making progress.  Rain, which fell for several days and flooded the fields.  Trees, which were uprooted by winds and hung at 30-degree angles over the power lines into the lab here.  (I took them down myself, with winches, a tractor, a series of ropes and pulleys, and a chainsaw.)  And yesterday, the machine running the Xerion Dmark demo crashed as I was cleaning a wad of dust from its front (I touched the boot switch?).  And the awful mess of the data - full of missing observations, many more than I realized.
But I rebooted the Xerion demo box, ran the network to train down to an old sample segmented time-series, got GNUplot and GS (Ghostscript) working right, and confirmed I can build my training-case file, run a "compareNet", and generate a visual of actual versus network training target.  I compared "fixed-step" training (using an epsilon of 0.1) versus a line search (in Xerion, "Ray's Line Search") - in both cases using the conjugate-gradient direction method - and the training drops from roughly 70,000 iterations to around 400.  The heuristic algorithm seems to be: run line search on conjugate gradient for the first 300 or 400 iterations, then switch to a fixed step, and you can train right down to noise, if you want to.  This technology works.  I suspect TensorFlow probably has all these kinds of intrinsics just built-in.  This is why I am stepping thru the process using the older Xerion code, so I can try to get a "through the glass, clearly" feel for how it works.  The original quote is biblical, is it not?  Something about "through a glass, darkly"?  [Edit: yes, it's 1 Corinthians 13:12.  And it's also a great 1961 Bergman film...]
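The size of that speed-up is easy to reproduce on a toy problem.  The sketch below is my own illustration, not Xerion code: it minimizes a small, badly-conditioned quadratic two ways - fixed-step steepest descent versus conjugate gradient with an exact line search - and counts iterations to the same tolerance.  The numbers are for this toy only, but the gap has the same flavour as the 70,000-versus-400 result above.

```python
import numpy as np

A = np.diag([1.0, 50.0])       # ill-conditioned quadratic: f(x) = 0.5 x'Ax - b'x
b = np.array([1.0, 1.0])

def fixed_step(eps=0.01, tol=1e-6, max_iter=100000):
    """Steepest descent with a fixed step size (the 'epsilon' approach)."""
    x = np.zeros(2)
    for it in range(max_iter):
        g = A @ x - b                       # gradient
        if np.linalg.norm(g) < tol:
            return it
        x -= eps * g
    return max_iter

def conjugate_gradient(tol=1e-6, max_iter=100):
    """Conjugate-gradient directions with an exact line search."""
    x = np.zeros(2)
    r = b - A @ x
    d = r.copy()
    for it in range(max_iter):
        if np.linalg.norm(r) < tol:
            return it
        alpha = (r @ r) / (d @ (A @ d))     # exact minimizer along d
        x += alpha * d
        r_new = r - alpha * (A @ d)
        beta = (r_new @ r_new) / (r @ r)    # Fletcher-Reeves update
        d = r_new + beta * d
        r = r_new
    return max_iter

print("fixed step iterations:", fixed_step())
print("conjugate gradient iterations:", conjugate_gradient())
```

On a quadratic, conjugate gradient with exact line search finishes in at most the number of dimensions; the fixed-step run grinds through thousands of iterations on the same problem.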

[May 4, 2017] - Still doing data-cleaning... Also, I downloaded all the "MPlayer/GMPlayer" code, and built "MPlayer" (and the desktop GUI version, called "GMPlayer") for my CentOS Linux box from source.  You can get the code I used here:  This version is from 2016-01-24, and includes FFmpeg 2.8.5 in the tarball.  To build MPlayer, you can create a source directory /usr/local/src/mplayer, download the tarball to that directory (I used MPlayer-1.2.1, as it looked quite stable), run gunzip to unpack the zipped file, then "tar -xvf" to untar the ball and create the MPlayer-1.2.1 source directory structure.  Then, you just cd to it and do the usual "./configure", then "make", and then "make install" from a command-line shell.  Make sure to include the "--enable-gui" parameter to "./configure", or you only get a CLI (command-line interface) version of mplayer.  When I tried to configure, I got a message saying I needed "yasm", which turns out to be an open-source assembler that mplayer uses for some of its lower-level stuff.  So, you go get "Yasm", and do the same exercise - create a /usr/local/src/yasm directory, download the tarball there, unzip with gunzip, untar with tar -xvf, and run the three cmds: ./configure, make, make install.  That should install a working version of yasm.  Check it by entering "yasm --version" at a command shell.  Here is the Yasm url:  Having an open-source assembler might turn out to be useful here.  MPlayer, of course, is used to watch video files, or listen to music files.  The program "gmplayer" pops up an on-screen controller, which can be used to choose files to play, and/or create playlists.  You have to set up a default "skin" to see the gmplayer controller, which involves another download of a tarball into /usr/local/share/mplayer/Skin (I got Blue-1.12, at this url: , and used "bunzip2" to unzip the ball).
Then, I had to copy the contents of the "Blue" directory that was created into a subdir called /usr/local/share/mplayer/skins/default, in order for gmplayer to actually work.  This process builds the executables mplayer and gmplayer in /usr/local/bin.  Create a launcher icon on your desktop to run "/usr/local/bin/gmplayer", and you will have one-click sound and vision!  There is method to my video-madness:  Ideally, I would like to have my neural-net take input from a real-time market feed, and output a real-time video display which would augment one's own ability to develop a market "picture". Cool
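The whole recipe above, collected into one place.  This is a sketch only: the tarball downloads are left as comments, since the URLs are not reproduced here, and the paths and versions are the ones described in this entry.

```shell
# Build MPlayer/GMPlayer 1.2.1 from source, as described above.
mkdir -p /usr/local/src/mplayer
cd /usr/local/src/mplayer
# (download MPlayer-1.2.1.tar.gz into this directory first)
gunzip MPlayer-1.2.1.tar.gz
tar -xvf MPlayer-1.2.1.tar
cd MPlayer-1.2.1
./configure --enable-gui    # without --enable-gui you only get the CLI mplayer
make
make install                # installs mplayer and gmplayer into /usr/local/bin

# If ./configure complains about "yasm", build it the same way:
mkdir -p /usr/local/src/yasm
cd /usr/local/src/yasm
# (download the yasm tarball here, gunzip and tar -xvf it, cd in, then:)
# ./configure && make && make install
# yasm --version            # check the install

# gmplayer needs a default skin to show its controller:
mkdir -p /usr/local/share/mplayer/skins/default
# (download the Blue-1.12 skin tarball, bunzip2 it, and copy the contents
#  of the created "Blue" directory into /usr/local/share/mplayer/skins/default)
```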

[Apr. 30, 2017] - (Doing this edit on my CentOS box, running Linux.  Works well.  Same box as I run my Rails webserver on, which keeps track of news stories in a little SQLite database.  Using Linux feels like being let out of that Apple-Microsoft jail... "Free! Free at last!")  So, I built the MAKECASE program, to run thru my little database of time-series tables, and build a single table where each row contains a vector of data observations (mostly prices) for a given date.  MAKECASE takes a single vector of series numbers, and returns one big table, where each row is a date, followed by a bunch of observations, one from each series.  For the old Dmark stuff, I scaled the series-segmented data to fit between 0 and 1 (trains better, given the sigmoid transfer function), but now TensorFlow can use logit, which has a faster rise-time, which might be better.  Will try a first version on Xerion, without scaling.  Then I will attempt to replicate the same training using TensorFlow - my first attempt to use it for something real.  MAKECASE still does not create the training target, which will be a "market characterization vector".  I'm thinking maybe take one key portfolio element, and cast its direction, intensity and dispersion, and try to train to that.  Or maybe a percent delta of price beyond a noise-filtering threshold?  The real key here is keeping the data clean.  And I should have *two* datasets, so I can see if I have just trained down to noise (you know this if you train fine on the first, but executing the 'net on the second set does not show any success beyond randomness).  Or maybe I should just reduce the training attempt to a simple binary value: 0 = do nothing, and 1 = take a long position for a specific time window.  How far should MAKECASE try to look ahead to code its target?
I suppose, ideally, you could let the network look at *everything* by *every* time-lookahead, but I want to narrow down to something specific, so I can evaluate its effectiveness, and value.  Collapse the "market characterization vector" down to a single risk-off/risk-on binary value?  That way, I am not trying to forecast, and I may get something useful.
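Both of those data-handling points - the [0, 1] scaling, and holding back a second dataset for the trained-down-to-noise check - are simple to sketch.  Hypothetical code with invented numbers; for a time series, the split keeps time order rather than shuffling.

```python
def minmax_scale(xs):
    """Scale a series to fit between 0 and 1 (for sigmoid-style units)."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def split_cases(cases, holdout_frac=0.25):
    """Hold back the most recent cases as a second, out-of-sample set."""
    cut = int(len(cases) * (1 - holdout_frac))
    return cases[:cut], cases[cut:]

print(minmax_scale([2.0, 4.0, 6.0]))     # -> [0.0, 0.5, 1.0]
train, holdout = split_cases(list(range(8)))
print(train, holdout)                    # -> [0, 1, 2, 3, 4, 5] [6, 7]
```

If the network trains well on the first set but does no better than chance on the hold-out set, it has fit noise, not structure.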

[Apr. 27, 2017] - A data-provider I use to keep a database current disabled their traditional .CSV access methods, and replaced this simple tool with an interactive process that creates .GZ files for download.  So, I had to re-write the data-retrieval method I use, creating my own little script-driven robot to access the data and unzip the .gz files as required.  Everything works again, and I can move forward on the AI neural-net tools.  Will create a first pass of the database inversion tool, to prepare the cross-sectional training cases, which will train to a characterization vector created from near-future events.  In this way, I hope to sift out true trends from the market noise.
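The unzip half of that retrieval robot is a few lines with Python's standard gzip module.  This is a generic sketch - the provider, URLs, and login flow are not shown, and the sample content is invented.

```python
import gzip
import io

def gunzip_bytes(gz_bytes):
    """Decompress downloaded .gz content to text."""
    with gzip.open(io.BytesIO(gz_bytes), "rt") as f:
        return f.read()

# Simulate a downloaded .gz file and unpack it:
sample = gzip.compress(b"date,close\n2017-04-27,100.5\n")
print(gunzip_bytes(sample))
```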

[Apr. 09, 2017] - Google's "DeepMind Technologies" group in London has just open-sourced their "Sonnet" product.  This might be a big deal.  Sonnet sits on top of TensorFlow, and lets it be used to create complex neural networks more easily.  I am interested in trying it.  I've had a large tooth removed, have more dental work scheduled next week, and have to do a lot of tax work to file personal and corporate income tax and HST forms for farm and firm.  Dealing now with pain...  [Update:]  Just went thru the *DeepMind* website, did a quick scan of their Github stuff, & read their paper on "Learning to Learn".  Imagine if US rocket-pioneer Robert Goddard was transported to a 1990's launch of the space shuttle - that's how I feel after this quick scan.  These guys look like they own the AI field now, especially since they have the resources of Google behind them.  They look to have infinite power, both in CPU cycles and cash!  Oh my... Crying

My only chance here is that these guys like chess.  I *hate* chess with a passion - as I detest most closed-environment gamey stuff.  Game-playing is time wasted.  All the interesting stuff - the stuff that matters, that makes a difference to the future and drives humanity forward - lies in the open, *unbounded* realm of the pure real - the place where neural networks typically collapse and fail badly.  But you can use NN technology to *augment* human intelligence - like lenses can help your eyes see better, amplifiers can let you hear better, and computers can let you organize and process information better.  (And yes, like an M1911 .45 can be used to punch a hole thru your adversary better than your fist can - let's be honest.)  In a formal system that is tightly bounded by rulesets, where the distributions are known, a well-built AI will *always* win.  What about open scenarios, where there are no formal rules, and the rate and intensity of change itself is also dynamic?  Can an AI help?  I am pretty sure it can.  And I think I know what it has to be able to do.  The AI does not replace or overwrite the human agent; it augments his ability, and lets him make better decisions, quicker, and with less of the errors that behavioural economics shows us *really* do occur.  I'm not in this for the money.  I want to prove a point, more than anything, and build a device.  We need AI technology like soldiers need rifles.  This technology could aid us all by letting us make fewer mistakes, and avoid the "Wee-Too-Low! / Bang-Ding-Ow!" outcomes that are becoming increasingly common in our modern world.  Perhaps I still have a chance... Blush  (I put a picture of my primitive Analyzer tool output, essentially a first cut of the Augmenter I envision, running on a Samsung Tab-A, under Android 6.01, at screen bottom.
It shows the M-wave Analyzer output, calculated and displayed on the Samsung tablet, and an estimated probability density, which suggests trade size for a given risk level.  It essentially suggests how big you should bet, given the risk level you want to accept, and shows it all as a picture, so you can see exactly what you are dealing with, given the data-range you believe is appropriate for the current picture-of-the-world your necktop neural network tells you is now in play.  You can see where I am going with this, yes?)

[Mar. 31, 2017] - Got Xerion running with original late-1990's data (Dmark segmented time-series network).  Ran with many different types of training - confirmed it all works.  Xerion looks to be a predecessor product to TensorFlow in many ways.  Using simple steepest descent (standard backpropagation), with a fixed step and an epsilon of 0.1, it can take about 90,000 iterations to train down to the noise in a segmented timeseries.  But use a line-search, and conjugate gradient with restarts, and you can get to the same level of training (essentially, just overfitting a timeseries to check the limiting case of the training algorithm), and Xerion will fit to the curve in about 300 to 400 iterations.  It's a pretty dramatic difference.  My original approach was quite wrong (using a single time series, segmented into cross-sectional training cases).  I have a new idea, based on current practitioner methodologies, that looks to be much better.  I'm having arguments with a PhD type, who thinks NN tech is useless for market phenomena (he is a "random walk" believer, it seems), but given the modern state of the NN art, I am pretty sure my new approach can be useful.  I note with interest that Dr. G. Hinton (Xerion & TensorFlow AI academic guru), and Edmund Clark (former CEO of TD Bank in Canada), will be setting up a new gov't-funded "Artificial Intelligence" institute in Ontario, based in Toronto.  Two new charts at page bottom - a Ghostscript image of the original Xerion-driven DMark series (raw price data scaled to fit between 0 and 1) training versus network output, and today's Cdn-dollar chart - showing the complete NON-RANDOMNESS of the modern markets.  Markets are not random, they are chaotic.  The "random walk" picture of the world, where you believe in stable distributions, and build models that use distribution tails to estimate your risk, is wrong.  It has already given us the 2007-2008 financial meltdown.
Today, the Cdn-dollar chart looks like the output from a square-wave generator.  It's not random.  It is just one example of many that you can see *every day* in the markets. 

I've been stepping thru backpropagation by hand, using basic partial-differentiation calculus and the chain rule, just so I can clearly understand the original idea.  I learned some C++ also.  Downloaded Alan Wolfe's NN sample code, only to find it won't run on my Linux CentOS boxes with gcc 4.4, because of some new loop-construct recently invented and slotted into Clang or LLVM or whatever the heck the kids are now using - something from C++11 or C++17 or Apple's lab.  More reading to do.  This project is taking on a life of its own.
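Here is that by-hand exercise written out - my own minimal sketch: one input, one sigmoid hidden unit, one sigmoid output, squared error.  Every gradient is an explicit chain-rule product, and the key one is checked against a finite difference.  The weights are small made-up numbers.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, t = 1.0, 1.0            # one input, one training target
w1, w2 = 0.5, -0.3         # input->hidden and hidden->output weights

# Forward pass
h = sigmoid(w1 * x)
y = sigmoid(w2 * h)
E = 0.5 * (y - t) ** 2

# Backward pass: each line is one factor of the chain rule
dE_dy = y - t                        # dE/dy
dy_dz2 = y * (1.0 - y)               # sigmoid derivative at the output
dE_dw2 = dE_dy * dy_dz2 * h          # dE/dw2
dE_dh = dE_dy * dy_dz2 * w2          # error pushed back through w2
dh_dz1 = h * (1.0 - h)
dE_dw1 = dE_dh * dh_dz1 * x          # dE/dw1

# Sanity check dE/dw2 against a finite difference
eps = 1e-6
E_bumped = 0.5 * (sigmoid((w2 + eps) * h) - t) ** 2
print(abs(dE_dw2 - (E_bumped - E) / eps) < 1e-5)   # -> True
```

Scaling this up to layers of units is just the same chain rule applied with matrices instead of scalars.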

[Mar. 24, 2017] -  Completed prototype of neural network definition and activation routines in APL on iPad.  Great having a working spec - trivial Xor2 net - can train it on Xerion, and activate/execute the net on iPad using APL (which is great for matrix stuff).  See page bottom for picture.  Numbers match, Xerion in Linux, iPad using APL, for trivial toy case of Xor2 network.

[Mar. 17-20, 2017] -  Working on the "cross entropy" idea, which drives how artificial neural networks are trained.  The idea is that the initial (actual) probability distribution is mapped, by the artificial neurons in the network, out to a posterior target distribution - and that there are different entropy characteristics across the various possible target distributions.  One seeks to minimize the "Kullback-Leibler divergence", or the entropy difference between the initial and the posterior distributions.  This sounds quite complex, but if you are using "one-hot" encoding (for example, trying to identify written digits), and your initial distribution is simply "0 0 0 1 0 0 0 0 0 0" - ie. your number is a "3" - then the cross-entropy summation of the initial probability distribution values times the log of the posterior generated distribution boils down to taking a single natural logarithm of the sigmoid or logit value (ie. the probability-like number between 0 and 1) that the network generated for that digit.  You can use a gradient-descent search to drive your back-propagation, but the "stopping point" of the network training will be when all the cross-entropy values between the initial and posterior probability distributions are as small as possible.  It should be possible to make your network "recognize" with a high level of accuracy.  This recognition can extend to more than just written digits.  One should be able to create an artificial "Helper", that has superior recognition ability for whatever you train it for, given you can "show" it enough accurate raw data - what we used to call "training cases".  I suspect "Helper AI" technology might become a must-have tool as we move into this brave new world.  (I really wanted to get a TensorFlow AI running on my iPad.  My vision for this was Isaac Asimov's "Foundation" series - where Hari Seldon had his "probability calculator" in the first chapter, set on Trantor.
I can't get Numpy to load thru to Python yet on the iPad, but it looks like Xerion might work...)  I am thinking of asking a Japanese company to design a special Hyper-tablet device for me - but running *pure* Linux, no Android or iOS stuff in the way...
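The one-hot collapse described above is easy to verify numerically - a small sketch, with an invented predicted distribution:

```python
import math

target = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]     # the digit "3", one-hot encoded
predicted = [0.01, 0.01, 0.02, 0.85, 0.02,   # network's probability-like
             0.02, 0.02, 0.02, 0.02, 0.01]   # outputs (sum to 1)

# Full cross-entropy sum over all 10 classes...
full_sum = -sum(t * math.log(p) for t, p in zip(target, predicted))

# ...reduces to a single natural logarithm, because only the "hot"
# class has a nonzero target value:
single_log = -math.log(predicted[3])

print(abs(full_sum - single_log) < 1e-12)    # -> True
```

So training pushes the probability the network assigns to the correct class toward 1, since -log(p) shrinks toward zero as p approaches 1.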

[Mar. 14-15, 2017] - Fell down a big rabbit hole.  Decided to look at my old Xerion stuff, got obsessive about it, and decided to convert 20-year-old Uts/Xerion to run on a modern Linux box.  Xerion was the U of Toronto product built by Drew Van Camp and others, offered by Dr. Hinton's group to Canadian industry, as it was funded by a gov't grant process.  I took it and ran, and built a Linux box using Slackware Linux just to get Xerion running, and build some neural nets to investigate time-series data.  As I dug deeper into TensorFlow/Python, I realized it looked a lot like UTS-Xerion/Tcl/Tk+itcl+Tclx - which I know well.  Learning is all about jumping from one level to another.  Getting Xerion running on a modern Linux has been a bit of work.  (Just getting a clean compile of the code using a modern gcc was non-trivial.)  But I can run the original Xor2 example, and it all seems to work well.  Having Xerion running will be very useful, as I can verify TensorFlow builds against the original Xerion efforts.  Xerion is not convolutional, but it did offer a number of alternatives to basic gradient descent, which - in the example of training a boolean net like the Xor2 example - can be shown to be useful.  It's also a good learning tool, with nice visualization.  (Screen shot of Uts/Xerion is below...)  (Mar. 15:  Fixed a bug - Network Node Unit & Link Display not working, now fixed.  Built Xerhack, a visualizer toolkit that uses the Tk Canvas.)

[Mar. 8, 2017] - Got the image-hacking stuff working in Python on both Mac OSX and Windows.  Took the Ripples-in-Pond TensorFlow example, and made it look more like exploding stars in a dark star-field.  Runs *without* IPython, Jupyter and Python Notebooks (displays 5 images in sequence as .jpg files, uses SCIPY and the Pillow version of PIL (the famous Python Image Library)).  Images are interesting - like a star-field evolving over giga-years (see picture above).  Here is part of the code:  (Click "Code" in the top menubar for the rest of it...  Big Grin)

    # --- the Tensorflow LaPlace Image example (Uses PIL, and scipy.misc)
    # --- Modified: Mar 7, 2017 - by MCL, to just use image file display
    # ---                                       instead of Python Notebooks, IPython, etc.,
    # ---                                       with negative damping and darker image backgrd.
    # ---                                       (Instead of ripples in a pond, we have
    # ---                                       exploding stars ... )
    # --- Produces Initial image, 3 intermediate images, and the final image
    #     as .jpg files. Requires only: tensorflow, numpy, scipy and Pillow
    #     and Python 2.7.10.
    # --- This example taken from Tensorflow Site:
    # ---                           
    # --- and provides a nifty example of manipulating n-dimensional tensors.
    # ---
    # --- For Python newbies (me!):   1) invoke Python in terminal shell
    # ---                             2) >>> execfile("")
    # --- focus on understanding exactly how Tensorflow is reshaping tensors
    # ------------------------------------------------------------------------------------------
    # --- Import libraries for simulation
    import tensorflow as tf
    import numpy as np

    import scipy.misc

 <<< The rest of the code is in the "Code" section. Just click on "Code" on top menubar >>>



[Mar. 1, 2017 ] - As mentioned previously, I have TensorFlow + Numpy running on Python on the MacBook OSX now, and have finally got TensorBoard to show node names.  This is the first trivial W = m * x + b (linear regression) program one can run, using the gradient-descent method to do the least-squares regression line.  I've updated the two pics showing TensorBoard's display of a process graph for linear regression (now with variable names!), and the Python+TensorFlow code example.  I've also posted these to the GEMESYS Facebook site.  Next, I want to 1) create a very simple neural network, and 2) read a real data file of training cases, and produce some real output to a file.  There is a lot of useful information on StackOverflow and various websites built by clever folks.  I've learned a bit just reading the StackOverflow queries.  I was sold on the NN methodology in the 1990's.  Xerion used Tcl/Tk to provide visualizations, which I used to develop in (and still use!), but I typically ran my networks in background mode, and used GNUplot and APL to chart the prediction curves.  I have these old C programs I used to chop up data series, and I am itching to drop some of the old training files into a modern TensorFlow net.
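The same y = m*x + b gradient-descent fit, written out in plain Python so the mechanics are visible without TensorFlow.  A sketch with invented data generated from y = 2x + 1; the learning rate and iteration count are arbitrary choices that happen to converge here.

```python
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]        # generated from y = 2x + 1

m, b, lr = 0.0, 0.0, 0.05        # start at zero, small learning rate
for _ in range(2000):
    # gradients of mean squared error with respect to m and b
    dm = sum(2.0 * (m * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    db = sum(2.0 * (m * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    m -= lr * dm
    b -= lr * db

print(round(m, 3), round(b, 3))  # -> 2.0 1.0
```

TensorFlow's version does exactly this, except the gradients come from automatic differentiation over the graph that TensorBoard displays.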

[Feb. 24, 2017]  - TensorFlow is a bit more involved than Xerion, Prof. Hinton's U of Toronto product from many years back.  Here is my first hack, getting the basic tutorial running, with a trivial linear regression, and viewing the graph in TensorBoard, which one does using a browser session to localhost, port 6006.  To get the graphic working, you slot in the statement "writer = tf.summary.FileWriter('/Path/to/logfiles', sess.graph)" before you run your training.  This writes the event-log data for the model structure to the TensorBoard log-file directory, and allows the visual image of your model to be generated.  Very, very cool.  I put two images at the *very* bottom of the page: one showing the program text for my modified version of the TensorFlow "Getting Started" tutorial, with the simple linear regression model Y = m * X + b, and the generated TensorBoard model-structure image, which is viewed using the Firefox browser on the MacBook.

[Feb. 21, 2017]  - Ok, got it.  Finally got TensorFlow installed and working.  Gave up on the Linux box, as it runs some production stuff on news articles that I need.  Used the Apple MacBook Pro with Yosemite (OS X 10.10.5), which had Python 2.7.10.  It was a complex project, but I got it running.  Apple had Python 2.6 running by default, and I had installed Python 2.7 with "numpy" (the scientific numeric package for Python - it's just the old Fortran math libraries, which I used to use at Treasury for bond-math calcs and econ research).  Had to get the Python "pip" program working, and the first install of TensorFlow with pip smashed everything, due to a flaw in the pyparser stuff.  Had to manually fix a Python program called "" in the /System/Library/Frameworks/... directory tree, as well as disable the original "Frameworks"-located "numpy" and "six" modules.  This was critical.  The TensorFlow Python-pip install caused pip, easy_install, and the lot, to fail badly.  And the Frameworks-directory-tree Python modules (some Apple standard?) caused Python to always load the old Numpy 1.6 and six 1.4 versions - and TensorFlow needs Numpy 1.12 and six version 1.10 or higher.  Until I fixed the "" parser stuff, and disabled the Apple-located default numpy and six, TensorFlow complained about wrong versions.  What is silly is that "pip" (the Python install program) drops the updated modules in another directory, and until the ones earlier up the path are removed (eg. from numpy to numpy_old), Python keeps loading the old ones, even after one has run pip and/or easy_install to load in the new ones.  I put a note on StackOverflow, and posted the bug and the fix on Github/Tensorflow - search for Gemesys.  Bottom line is I was able to run the baseline TensorFlow tutorial, and make it print 'Hello TensorFlow!'
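A quick diagnostic for that path-shadowing problem - checking which copy of a module Python actually loads, and from where.  A generic sketch, shown here with a stdlib module rather than numpy:

```python
import sys

def which_module(name):
    """Import a module and report the version and file Python really used."""
    __import__(name)
    mod = sys.modules[name]
    return getattr(mod, "__version__", "?"), getattr(mod, "__file__", "(built-in)")

version, location = which_module("json")
print(version, location)   # if the location is an old Frameworks path, pip's
                           # freshly-installed copy is being shadowed
```

Running `which_module("numpy")` before and after the rename-to-numpy_old step makes it obvious when the new install finally wins.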

[Feb. 19, 2017] - I hate Linux dependency issues. Tensorflow requires glibc 2.14 and my CentOS 6.6 box has glibc 2.12, etc. etc...  TensorFlow wants Python 2.7 (or 3.5), but CentOS 6.6 is default Python 2.6.6, which "yum" needs to work, so I have to try virtualenv, or whatthef*ckever.   I've tried several tricks to get TensorFlow running, but no luck even on the Linux box.     I had hoped to put some datascience stuff on the iPad.  I have APL running, and GNUplot can do non-linear regression, but I was hoping to make a neural-net that could be trained on a GPU-Nvidia type Tensorflow box, and then just run on the iPad.  So far, no go.

[Jan. 27, 2017 - Started working with TensorFlow, trying to do some gradient descents across a custom phase-space.  I attended Geoffrey Hinton's Xerion lectures at UofT back in the 1990's, and I built some neural nets using Xerion NNS to backtest commodity markets.  They worked, actually, and I had planned to publish a page on Xerion and TensorFlow...  but I got very ill - some kind of flu thing which involved a 'cytokine storm'.  I'm recovered now, but it was touch and go.  Wanted to publish a page with a running Xerion net (or TensorFlow example) being back-propagated, on the iPad.  Apple is a serious monopoly, and AI is real and perhaps dangerous.  The idea is to have a hand-held device that can provide real-time decision support, but is not connected to any data link - what used to be called "air gap" security.  [Note: It is estimated that more than 70% of all trades on equity markets now are algorithmically driven.  If built right, they provide a real edge.]  For info on air-gap security, read Bruce Schneier's piece here:    The Dow 20,000 thing is a bit of a concern.  There may be too much digital cash floating around.  Historically, the markets have been very effective at removing excess wealth.  If interest rates move up quickly, equity markets could fall 20%.  That is DOW 16,000, and it may happen at "internet speed".  The current stability may be a dangerous illusion, as powerful forces pull our world and its markets in divergent directions simultaneously.]

[ Dec. 13, 2016 - Got "DOSpad Math" compiled and deployed successfully to iPad 2, using Xcode 6.2.3.  Insane work. Also, updated "Time Travel" page with Harlan Ellison montage. (Click "More" button on top line right to show "Time Travel Research" page) ]

[ Dec. 7, 2016 - OpenWatcom Fortran77 on the iPad  - details ]

[ Nov. 28,2016 - Included info on how to get Python 2.7 running on iPad ]

[ Nov. 03,2016 - Added page: How to put VLC 2.1.3 on iPad-1 running iOS 5.1.1 ]

[ Oct. 23,2016 - Added page on "GNU gcc" = How to compile & run a C program on iPad ]

The Hack Which Launched this Site...

I put this website together after I hacked my old iPad, and felt I should publish the method, as it turned the old device into a very cool experimental platform, and a surprisingly useful research tool, as it is possible to obtain most of the Unix/Linux utilities from Cydia, and configure Safari to be able to directly download viewed content (eg: videos, .PDF files of important documents, etc.)  As well, there are application hives, or "repos", which offer very useful utilities, such as "iFile", which allows navigation of the native file system.  (One uses Cydia to add "sources", such as "" and "", to gain access to these additional applications.)  (Further, if you use static IPv4 numbers on your local WiFi-enabled LAN, you can seamlessly transfer files between the iPad and either Windows or Linux machines.)

I've provided detailed instructions for "jailbreaking" the original iPad.  Once the iPad was opened up using the "Redsn0w" software, Cydia was used to obtain *root* access to it.  It is our belief that *root* access should be provided to all device owners, if they request it.  ("root" is the user-id that provides full, administrative control in any Unix/Linux system.  It is like the "Administrator" account in Windows.)  It is a lawful act to obtain this access - known as a "jailbreak" - for any device which you own.  And by doing this, you can open up the range of applications and technologies that the device can address, regardless of the restrictive trade practices that device makers employ to limit that capability.

Once the iPad was unlocked, and SSH and SCP were configured and made available, I was able to install sAPL and APLSE on it.  I also installed Borland 3.0 C++, and compiled the Blowfish encryption algorithm, to confirm that DOSpad (the PC-DOS emulator available for the iPad) behaved correctly.  The generated .EXE files for Blowfish on Android with gDOSbox, on Windows XP/SP3 CLI (Command Line Interface), and those compiled on the iPad under DOSpad are all identical.

I've also built and deployed, thru the Google "Play Store", some interesting apps on the Android platform.  These include gDOSbox, GNUplot37, and several APL interpreters.  The Android software is experimental, and does not contain any usage tracking or in-app advertising.  I did this project mainly because I wanted to run a real APL on a tablet, as APL was the first language I learned, at the University of Toronto and the University of Waterloo.

APL was (and is) unique in that it provided real-time, interactive computing before the advent of personal computers and tablets.  Ken Iverson, the inventor of APL, originally developed the language as a notational tool to express algorithms.  IBM then took the idea, and built the interpreter.  Personal computers - which ran only APL! - were developed by IBM in the early 1970's.  (A prototype was made available to some clients in 1973.  It was a complete personal computer - called "Special Computer, APL Machine Portable" (SCAMP) - and it ran APL.)  For those of us involved in computing in those early years, APL was the only real-time, interactive computing environment, and it was the first desktop, personal-computer system as well.

So I just had to put APL on these little tablets. Big Grin

The website here is a work-in-progress.   It consists of:

  - APL on an iPad  - the notes on how to hack the iPad, and open it up to installation of non-Apple/iTunes software.   Also includes a link to my github site, where a zip file of the P/C version of sAPL files can be obtained.  sAPL is freeware, and can run in "Cmd" shell on any Windows machine, as well as Android gDOSbox, or iPad DOSpad.  (See below)

  -  GEMESYS Apps on Android - just a short summary.  This software is experimental, and is provided primarily for educational and recreational use.  Google keeps changing Android, and this makes the Android environment fragile and unstable.  Note that if you are running Android Lollipop or Marshmallow, you will need to download, and make the default, the "Hacker's Keyboard" to use the GEMESYS Android apps now, as Google has altered the Android system-keyboard operation.  (See below...)

  - Fractal Examples on iPad using APLSE  - I show two recent images generated using APLSE running on the iPad. (Also down below...)

  - GNU gcc & Python 2.7 - How to Compile & Run C programs natively, and install Python  - Application development for tablets typically involves IDE's and a bunch of stuff to fabricate vendor-locked packages.  With a *jailbroken* iPad, you can load GNU gcc onto it, and develop and run C programs right on the device. The underlying iOS is very unix/linux like, and can be used effectively on its own, as a fully functional computer, once tools are made available.  Python 2.7.3 can be installed also. (First button, top line)

  - OpenWatcom Fortran-77 - How to run Fortran on an iPad - This is another DOSpad trick, where OpenWatcom Fortran77 is shown configured and running on the iPad. 

  - How to Put VLC on iPad-1 - Apple will not let you access the older versions of applications from their iTunes/iStore.  They want you to buy a new device - each year, it seems.  But if you jailbreak your iPad, you can get the .IPA file from the VLC archive, and install it with Install0us.  VLC is fully open-source, and will let you watch downloaded .FLV (Flash Video) files.  VLC 2.1.3 for iPad-1, running iOS 5.1.1 is Taro-approved.

  -  Pictures from Space - I have a research-interest in Chaos Theory, fractal geometry, turbulent flow, and so on, with specific reference to the Capital Markets.  Images from space show an astonishing variety of fractal examples.  The recent Juno probe has returned amazing images of the turbulent flow of the atmosphere of Jupiter. (Second button, top line). The ISS also shows wonderful space-views of our home-world.

   -  Economics and the Stock Market.  (What I studied (officially) when I was at school).  And since we pay the bills as much by our investment results, as by our consulting efforts, the markets remain a constant and critical focus.  I will try to note some useful observations here. (Third button, top-line)

  -  Statistics & The Null-Hypothesis.  A very great deal of what is written about statistical methods, and the mathematics of data-science oriented research, is either incoherent or incomprehensible.  I ran across this well-written note, and before it is vandalized by professional statisticians who seek to raise the barriers to entry to their dark-arts, I thought it should be preserved.  I will try to add some clear examples of actual research.  I used to use SPSS, SAS and R.  Awful stuff, but data analysis can yield wonderful dividends, if it is done right, and you understand *exactly* what you are doing.  (Button 4, top-line)

  -  Hausdorff (Fractal) Dimension Examples and Explanations - lifted from other websites (which may change).  The examples and explanations are good, and I wanted to preserve them. (More button / top line)

  -  Images and notes on Time Travel (Why not?  It's my site!)  And who does not love the idea of Time Travel?   We are all time travellers, aren't we?  The past offers us insight, and the future, opportunity.  But what will the future hold - pleasant dreams or our worst nightmares?    (More button / top line)

Any comments or questions can be addressed to gemesyscanada < a t > gmail dot com.  (I spell out the email address here to limit the spam robots from mail-bombing me.  I trust you can understand the syntax.)

  -  TensorFlow/Xerion Neural-Network Development.  This is my latest thing, and I hope to use this new (old) technology to pull together a number of threads, and get to a better method.  If Thaler's work is right (based on Kahneman and Tversky), my weakness and deep loss-aversion will just keep me from taking action, when it is needed most.   It appears one must effectively automate all investment activity, if one is to have any chance nowadays.  The low-return world demands it, as do the AI/algorithmic-driven modern markets.  One cannot fight the world - one must dance with it. Wink Note - I started out planning to use TensorFlow primarily, but I could not get it to run on my Linux boxes.  I finally got it running on my MacBook, but I found I was also able to get Xerion running on my modern Linux machines.  Xerion is the Univ. of Toronto product that Dr. Hinton's team developed in the 1990's.  It is written in C and Tcl/Tk, and it is complex, but I know it well.   I had originally run Xerion under Slackware Linux, in 1995-8, and had built neural-nets to forecast commodity markets.  At first, compiling Xerion under gcc generated a blizzard of errors.  But I made a number of minor changes, used a 2008 gcc 4.3.0 version (with some custom-hacked stuff to address gcc 1990's-isms), and downgraded Tcl/Tk from 8.5 to 7.6.   The running Xerion (with examples shown) runs on Fedora Linux boxes, and works surprisingly well.  Much better than I expected, actually.  I re-ran some of my old stuff from the 1990's (the D-Mark forecaster) as a regression-test, and confirmed I could generate exactly the same results, right down to the GNUplot graphics, viewed using Ghostview (I'm using GPL Ghostscript 8.6.3; GNUplot will generate both .jpg and postscript output files).  I hope to transition some work to TensorFlow, soon.  But the Xerion stuff - using this signed boolean jump-delta idea - seems also to work *much* better than I expected.
It is actually kind of exciting, truth be told.  I have this sAPL workspace, "neuralxr", running on the iPad, which I think I can extend to basically run (ie. "activate") the Xerion-trained "MarketNet", for an experimental series I have been focusing on for years.  If you look carefully, you can see the target is CM.  I use CM because it has unique, serial-autocorrelation characteristics - like a junk-bond, actually.  If you think of equity as basically a 100-year bond, then this stuff, with its curiously high yield, is basically just a long-duration, not-really-but-trades-like-it, high-risk, high-yield bond.   I have no formal connection with CM of any kind, except a small LOC on my farm (full disclosure) from them, which is undrawn.  Another property that makes CM unique among Cdn banks is its historical commercial roots.  They are risk-taking real bankers, who get out and make loans.  It's a risky business, but it is also very profitable.  And I have an old high-school buddy (more full disclosure) who runs a major regulatory organization that manages the macro-prudential systemic risk monitoring of Cdn SIFIs, and I am confident his guys are doing their jobs.  But let me stress, I have no special knowledge, beyond what I read in the papers, and on the wire services.   Banking is just one of those wildly good business models.  As long as you don't blow up, given the modern world (buckets and buckets and buckets of fiat money created everywhere, all the time, by just about everybody - and without any recourse required to turn it into gold or latinum-bars or anything but computer bits), banking only really has the systemic risk of hyper-inflation to deal with.
In a world awash with fiat-cash, even if you make too many bad loans, as long as you ensure adequate collateral (Canada has a long tradition of 75-25, for example - banks won't loan more than 75% of value without CMHC or someone else taking the hit if the loan sours), then worst-case, you stop making money for a while.  For example, on my farm, which is worth maybe 7 figures, and has *no* mortgage at all, the LOC is only 5 figures, and is not even drawn.  In the part of the Province where I live, this is typical.  Farms around here often sell for cash, or with financing arranged by family connections.  Yes, the large commercial loans banks make can go south, and then you have to set aside reserves.  But the capital requirements are tough and fiercely enforced here.  As we drive towards the future, Canada looks more like the Switzerland of the Americas, rather than the "Argentina of the North" some used to term it.   I also target CM in my NN example because it is a good trader's stock - lots of action, whether you like it or not.  The jump-delta table wants to be full of lots of -1's and +1's, not just a bunch of zeros, right?  So it is obvious, then, that you want to train to a target that demonstrates beta greater than one, and has a Hurst exponent that does not converge on 0.50 over time.
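To make the jump-delta idea concrete, here is a minimal sketch of how a daily close series might be turned into a signed boolean target table.  This is my own illustrative reconstruction, not the actual TSM transform; the function name and the threshold value are assumptions:

```python
# Illustrative sketch (NOT the actual TSM transform): map day-over-day
# price changes to a signed boolean "jump-delta" target.  A move larger
# than `threshold` (as a fraction of the prior close) maps to +1 or -1;
# smaller moves map to 0.

def jump_deltas(closes, threshold=0.01):
    """Map day-over-day price changes to +1, -1, or 0."""
    targets = []
    for prev, cur in zip(closes, closes[1:]):
        change = (cur - prev) / prev
        if change > threshold:
            targets.append(1)
        elif change < -threshold:
            targets.append(-1)
        else:
            targets.append(0)
    return targets

if __name__ == "__main__":
    closes = [100.0, 102.5, 102.6, 99.8, 100.1]
    print(jump_deltas(closes))   # -> [1, 0, -1, 0]
```

Wanting a table full of +1's and -1's rather than zeros, as described above, then just means picking a threshold small enough, and a target series volatile enough, that the middle branch rarely fires.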

Neural-Net run on iPad using sAPL

I have hacked and "jailbroken" my iPad Gen-1, and have loaded sAPL on it.  This was the APL product I originally released on the Blackberry Playbook, and remains available for Android devices, from the Google PlayStore. (A Windows Cmd-shell and/or DOSbox version of sAPL is available from the GEMESYS Github account, as a .zip file.)   sAPL is a P/C version of the original IP Sharp APL mainframe product, which ran on IBM 370's, and Amdahl V8's.  This iPad version, running under DOSpad, provides a workspace just over 300K.  It is a small, but reliable, implementation of a full APL.

See the section: "APL on iPad" for details on what had to be done to put APL on the iPad.

I've built a small sAPL workspace, as a proof-of-concept, that accepts the weights, bias values, and structure of a trivial Xor2 (boolean exclusive-or) neural network, trained using Xerion, which can be activated (ie. run) on the iPad.  This has potential applications, as it would allow a complex network to be trained on a research machine, and then the network's weights and structure can be transferred to the iPad, so that evolving, real-time scenarios can be entered on the fly, by someone who wants to query what the trained network "thinks" of a possible data-scenario.  It's a simple approach, but might be useful.  An example of the simple Xor2 network being activated is shown to the right.
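For illustration, here is a minimal sketch of what "activating" a trained Xor2 net amounts to, outside of Xerion: a 2-2-1 feed-forward pass through logistic units.  The weights below are a hand-picked set that happens to realize XOR - they are not the actual Xerion-trained values, and the function names are my own:

```python
import math

# Illustrative sketch of activating a small trained network outside of
# Xerion, the way the sAPL workspace does: a 2-2-1 feed-forward pass
# through logistic (sigmoid) units.  Weights are hand-picked to realize
# XOR, NOT the actual Xerion-trained values.

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def activate(inputs, w_hidden, b_hidden, w_out, b_out):
    """Forward pass: input layer -> hidden layer -> one output unit."""
    hidden = [logistic(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    return logistic(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

W_H = [[ 6.0,  6.0],   # hidden unit 0 behaves like OR
       [-6.0, -6.0]]   # hidden unit 1 behaves like NAND
B_H = [-3.0, 9.0]
W_O = [6.0, 6.0]       # output fires only when both hidden units do
B_O = -9.0

if __name__ == "__main__":
    for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(a, b, round(activate([a, b], W_H, B_H, W_O, B_O)))
```

The point of the workspace is exactly this separation: the expensive training happens once on the research box, and the cheap forward pass above is all the tablet ever has to run.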

GEMESYS Apps for Android - on the Google Play Store:

gDOSbox has over 50,000 downloads on Google Play Store

The following GEMESYS Android Apps are available on the Google Play Store:

gDOSbox  -  This is a full-featured implementation of the DOSbox-0.74 open-source DOS emulator for Android.  It was developed for Android version 4 (KitKat series), and was recently upgraded to work on Android 5 series (and above) devices.  Recent changes by Google to their keyboard have caused issues on some devices, so we strongly recommend the "Hacker's Keyboard", by Klaus Weidner. 

Download "Hacker's Keyboard" from the Google Play Store, then use the Settings icon, scroll to "Language and Input", and select/invoke the "Hacker's Keyboard".  Then, in the "Default Keyboard" option, choose the "Hacker's Keyboard" as your Default Keyboard.  The Google keyboard attempts to hijack *all* user input, and damages the gDOSbox interface routines.

gDOSbox is a full DOS implementation, with corrected math routines, which allows DOS .exe files to be run on an Android tablet. 

GNUplot37 - A version of the GNUplot graph generation tool.  Allows data to be quickly plotted in two and three dimensions, as well as supporting math processing, curve-fitting to data, and displaying the result.  Try it with:  "plot sin(x)" to see a sine wave.  Then load the demo (hundreds of examples) with "load 'all.dem' ".   To clear the screen (if using an on-screen keyboard), use "!cls", and use "!dir /p" to review all the GNUplot examples available.

sAPL      -    The original IP Sharp 32-bit APL, which runs in an emulated IBM 360/75 environment as a series of .exe files, originally released to run on IBM P/C's, and then made into a freeware product by IP Sharp, to encourage APL usage and education.  APL characters are generated by ALT-key sequences (eg. ALT-L creates the APL quad character, ALT-[ creates the assignment operator, etc.), so the Hacker's Keyboard is required.

APLSE    -   The STSC APL freeware product, directly downloadable from the PlayStore.  (You do not need to install gDOSbox separately; it is loaded first.)  This is an excellent small-footprint APL, which has full graphics support.  It is reliable, and was released as a freeware product to encourage and assist APL education.  Like sAPL, the APL characters are created using ALT sequences, so ALT-[, for example, is the assignment operator.  The "Hacker's Keyboard" is required.

TryAPL2  -   The IBM full featured "TryAPL2" product, which allows a subset of early APL2 to be run on a P/C.  This is a working APL, which includes IBM's variant of the enclosed-array extensions.  APL characters are generated with shift-letter sequences, so gKeyboard can be used with this APL.

WatAPL  -    The original Watcom APL, circa early 1980's.   This was recovered off of an original Watcom APL System floppy diskette, and dates from 1984.  It can be used with the gKeyboard, as the APL characters are generated with Shift-key sequences.

gKeyboard - A basic keyboard, with the APL characters shown on keytops.  Useful for TryAPL2 and WatAPL, and for learning the location of APL characters on the keyboard.

All GEMESYS software is freeware for educational purposes, and contains *no* advertising or in-app usage monitoring or tracking.

The seven GEMESYS apps for Android. No *root* access is required to run any of them!

Examples - iPad/Samsung Tab-A as "AI-Helper" platform, Xerion Xor2 Example, TensorFlow Linear Regression Example

The freeware APL,  APLSE, can be run on the iPad, using appropriate emulation. As an example,  I calculate and generate a graphic of the Logistic Equation phase-space, as a fractal example.  For those who study or work with fractals and Chaos Theory, the "Tent Map" is well known.  That was my first example.
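The Logistic Equation phase-space mentioned above comes from iterating x → r·x·(1−x).  A minimal sketch of generating the attractor data (which could then be handed to GNUplot, or plotted in APL) follows; the parameter choices are my own, for demonstration:

```python
# Illustrative sketch of the Logistic Equation phase-space calculation:
# iterate x -> r*x*(1-x), discard the transient, and keep the points
# the orbit visits.  Parameter choices here are illustrative only.

def logistic_attractor(r, x0=0.5, settle=500, keep=100):
    """Iterate past the transient, then return the visited points."""
    x = x0
    for _ in range(settle):          # let the transient die out
        x = r * x * (1.0 - x)
    points = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        points.append(x)
    return points

if __name__ == "__main__":
    # Below r = 3 the orbit collapses to the fixed point 1 - 1/r;
    # well past the period-doubling cascade (~3.57) it is chaotic.
    print(round(logistic_attractor(2.5)[-1], 4))   # -> 0.6
    print(len(set(logistic_attractor(3.9))))       # many distinct points
```

Sweeping r across a range and plotting (r, point) pairs gives the familiar bifurcation diagram; the Tent Map is handled the same way, with a piecewise-linear map in place of r·x·(1−x).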

I also have GNUplot37 running on the iPad; it is also available from the Google PlayStore as an Android app (no adverts, no in-app monitoring, no scams), and it can be used to visualize a variety of numeric datasets.  Three examples are shown below (all running on my customized, jailbroken iPad, which, once jailbroken, functions as an effective Linux/Unix tablet computer).

The electrostatic field display (see the hundreds of tiny, pointing vectors?), is an example from the GNUplot37 demo programs.  It takes about 12 minutes to run on the iPad, but the information it conveys is impressive.

A straightforward economic series - the London Gold price 10:30 AM fixing, daily data, from 1965 to 2016 - is shown.  If you look at long-duration, accurate price-series, you can see the mechanism of market dynamics fairly clearly.    The boom-bust sequences in the spot gold market are typical of *all* financial markets.  That is why the attempts by American and European legislators to over-protect the financial system are deeply misguided.  Markets *require* the freedom to bankrupt foolish people who mindlessly follow trends, and enrich those who deploy risk-capital in places where real risk is present.  Risk needs to be recognized as a very real part of how markets do their job.  Remove risk, and you remove the effective, allocative intelligence of market behaviour.  Political people are often quite unable to grasp this simple truth.  Prices have to *move* and sometimes, move *a lot*, in order to do their job correctly.  Blaming markets for bad outcomes is as unwise as blaming oxygen for causing a fire.

The last iPad display example shown is a 3-D graphic showing a surface generated by a trigonometric function, again using GNUplot37 on my hacked iPad.

My Vision for the AI-Augmenter (or AI-Helper..)

My vision for the AI-Augmenter (or AI-Helper), involves having a series of well-trained neural networks on a tablet device, and being able to interrogate them with current data, and get an "opinion" from them - and possibly display this amalgam of the AI's opinion in a graphic format that a human is comfortable interpreting - perhaps like a cross between the electrostatic field display (a bunch of little pointing vectors), and the 3-d surface, shown in the last example.

Examples of Xerion running the simple Xor2 network on a Linux development box are shown, as is an example of TensorFlow (the Google AI toolset, recently open-sourced), running on my MacBook Pro.  I find working on the MacBook Pro annoying and irritating, and just getting Python to successfully load and access all the libraries needed to run TensorFlow was more work than porting Xerion to a modern Linux, and getting a clean compile from the source.  I had to down-convert Tcl/Tk from 8.5 back to 7.6 and such, but that was not a huge hardship or difficult exercise.  The MacBook Pro hardware is very fine, but the Apple software is carefully designed to aggressively benefit Apple, regardless of the grief it causes independent developers.

Given that Microsoft was accused of being a "monopoly", and faced lawsuits for simply including a browser in its Windows O/S, I remain astonished by the extensive, and unchallenged use of monopolistic strategies that Apple gets away with.  They have restrictive dealer pricing, a "you-can-only-do-your-business-through-our-company store" policy that is a classic strategy of a monopolist, and they want additional cash-payments just to access development tools that are required to write computer programs that are to run on other Apple hardware.   In the 1970's, when IBM attached similar restrictions to their mainframe machines, they were successfully prosecuted by the US Justice Department for monopolistic, anti-competitive behaviour.   I like Apple hardware (which is built off-shore), but the code inside iOS that initiates the "Killed: 9" response when I attempt to run a gcc-compiled C-program, seems more like a monopolist's strategy, than it does a legitimate attempt to protect the system integrity. (See the "GNU gcc & Lynx" section, top line of this site to see what I am referring to.)

Very recently, Google has announced it will offer (as open-source) something called "TensorFlow-Lite", which will allow a subset of TensorFlow to operate on a tablet.  This is a very wise idea, and typical of the cleverness the Google folks demonstrate.  The most effective place for an AI tool is right in the hands of the client.

And this is key:  It has to be *unique*.   If AI is to have any benefit for me - especially in a complex, dangerous, tactical situation - it will have to offer something unique that only I have - it must offer me an *edge* of some sort.  It need only be a tiny edge (as most are), but it will be the capacity of AI-Helper tools to offer that custom edge that will make them quickly indispensable.  Once your "AI-Helper" is understood to be offering you a real, actionable advantage, it will quickly become essential - like a telephone was to a stock-broker of old, or an automatic assault rifle is to a soldier on the battlefield.

The three iPad images below were made with GNUplot37, which runs on the jailbroken iPad, under the DOSbox port called "DOSPad-Gsys".  The old Ver. 1.0 iPad can be a fully-functional and useful computer, once the Apple iOS restrictions are bypassed.  The field-lines display is particularly interesting, as it requires substantial floating-point math calculations to create.

London 10:30 AM Spot Gold Price - 1965 to 2016, rendered on iPad, using GNUPlot37, running under DOSpad Gsys.

Example of Surface Plot - 3-D, using GNUplot37 - with contours, and accurate math processing.

Xerion/UTS running on Linux, showing Xor2 example with Unit & Link Display. Background shows Xerhack, a visualization tool built using Tcl/Tk-Canvas.

Left side is Xerion on Linux, right side is Actnet function in sAPL on iPad, with same network weights. Example training cases produce same network output, both platforms.

Prototype of an Augmenter-AI. Do the kind of data-science you need to do to actually make money, and run it on an Android tablet you can carry in your pocket. The example here is Samsung Tab-A, running Android 6.01 and gDOSbox running APLSE. Your counterparties are *all* using AI technology, so each fellow in the field (or the barn) had better have access to it as well.

Here is the Probability Calculator running on the jailbroken iPad.  This shows an estimated probability density function for a possible trade with a 20-day duration.  The underlying market database can be migrated to the iPad from the desktop box via secure copy (the Linux utility "scp", given that one has Cygwin tools to support "ssh" (secure shell) on the Windows box that maintains the data).  The idea, of course, is to have a series of neural networks watching all the data in something close to real time, and migrating information like this to the tablet, where visualization can be used to sanity-check the network's recommendations, before pulling the trigger on any given trade.
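As a rough illustration of the kind of estimate involved (this is my own sketch, not the actual Probability Calculator's code): collect the historical 20-day log-returns from a close series, and bin them into a crude empirical density for the 20-day horizon:

```python
import math

# Illustrative sketch (NOT the Probability Calculator's actual code):
# overlapping 20-day log-returns from a close series, binned into a
# crude empirical density estimate for the 20-day horizon.

def horizon_returns(closes, horizon=20):
    """Overlapping log-returns over `horizon` trading days."""
    return [math.log(closes[i + horizon] / closes[i])
            for i in range(len(closes) - horizon)]

def histogram(values, bins=10):
    """Return (bin left edges, relative frequencies)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0   # guard against a flat series
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    edges = [lo + i * width for i in range(bins)]
    freqs = [c / float(len(values)) for c in counts]
    return edges, freqs

if __name__ == "__main__":
    # Toy close series, just to exercise the functions.
    closes = [100.0 + 10.0 * math.sin(i / 7.0) for i in range(300)]
    edges, freqs = histogram(horizon_returns(closes))
    print(abs(sum(freqs) - 1.0) < 1e-9)   # frequencies sum to 1
```

A smoother kernel estimate would be the natural next step, but even a coarse histogram like this is enough to eyeball the tail-risk of a proposed 20-day trade.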

Wine-2.0.1.tar.xz checksums (MD5 and Sha256). I've just downloaded the stable Wine 2.0.1 code, and have now migrated my Time Series Manager to Linux - currently Fedora and CentOS.
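Verifying a download against published checksums can be done with the stock md5sum/sha256sum utilities, or in a few lines of Python.  A sketch follows; the demo hashes a throwaway file, and for the real check you would pass the tarball's path and compare against the digests published for Wine:

```python
import hashlib
import os
import tempfile

# Sketch: verify a downloaded tarball (e.g. wine-2.0.1.tar.xz) against
# published MD5 and SHA256 checksums.  The demo below hashes a throwaway
# file; the real check compares file_digests("wine-2.0.1.tar.xz") to the
# digests published alongside the release.

def file_digests(path, chunk=1 << 16):
    """Stream the file once, computing MD5 and SHA256 together."""
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            md5.update(block)
            sha256.update(block)
    return md5.hexdigest(), sha256.hexdigest()

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(b"hello")
        name = f.name
    md5_hex, sha_hex = file_digests(name)
    os.unlink(name)
    print(md5_hex)   # MD5 of b"hello": 5d41402abc4b2a76b9719d911017c592
```

Streaming in chunks matters here: a .tar.xz of this size hashes fine either way, but the same function then works unchanged on multi-gigabyte files without loading them into memory.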

Windows .EXE's for TSM and Gnuplot, running on CentOS 6.6 (Linux kernel 2.6.32-504.el6.i686), using a Pentium 4 (2.40 GHz) cpu, with only 2.0 GiB memory. I built this box just as an experiment (old 32-bit processor), but it runs so well, I can run a WEBrick Rails web-server as well as the old analytic stuff, and it is still snappy quick.