Tuesday, June 04, 2024

Software Development As Reinforcement Learning



Imagine there were some way to discipline software development with a metric so singular and objective that, no matter how many billions you had with which to hire Developers Developers Developers, even Steve Ballmer could produce higher quality software than anything imagined by Linus Torvalds in his wildest dreams of Linux kernel quality.

There are two types of readers of the above sentence:

  1. Huh? Of course spending billions hiring Developers Developers Developers can result in higher quality software than an open source project like Linux!
  2. That's almost inconceivable since the kinds of "Developers" attracted to huge steaming piles of cash are going to elbow out of the way anyone remotely resembling someone I want anywhere near software I'm going to rely on.

Only the second type has any hope of comprehending a revolution in software development so simple as to beggar the imagination:

Reward software developers who reduce the size of a bootable image that installs the software and passes the test suite.

This might be called Software Development As Reinforcement Learning (henceforth SDARL), but don't get hung up on replacing human programmers with LLMs or some such: it works regardless of the form(s) of intelligence acting as "developers". To be clear, there is a connection to the foundation of artificial intelligence in Algorithmic Information Theory. The metric for the benchmark is called the Algorithmic Information Criterion for model selection (henceforth AIC): the most principled information criterion for model selection, and a formalization of Ockham's Razor.
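
To make the reward criterion concrete, here is a minimal sketch in Python. The harness script and paths are hypothetical stand-ins (for the Linux kernel, think of an LTP run inside a VM); the point is the shape of the rule: zero reward unless the whole test suite passes, and otherwise a reward of exactly the bytes shaved off the reigning champion image.

import subprocess
from pathlib import Path

def passes_test_suite(image: Path) -> bool:
    # Hypothetical harness: boot the candidate image in a VM and run
    # the full test suite against it; pass/fail is the exit code.
    return subprocess.run(["./run-tests-in-vm.sh", str(image)]).returncode == 0

def sdarl_reward(champion: Path, candidate: Path) -> int:
    # No reward unless every test passes; otherwise the reward is the
    # number of bytes the candidate shaves off the champion image.
    if not passes_test_suite(candidate):
        return 0
    return max(champion.stat().st_size - candidate.stat().st_size, 0)

Any agent -- human or LLM -- maximizing this reward is pushed toward the same target: the smallest bootable image that still passes.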

Before we go further into this revolutionary idea, let's ground it in an exemplar:

The Linux Kernel

The aforementioned "test suite" is, for the Linux kernel, produced by The Linux Test Project.

With that context, here's the FAQ:

Q: Are you talking about the size of the bootable ISO image that installs Linux?

A: Not likely. The bootable image might expand into something like a deployable image of Linux, but it would be limited to the kernel only.

Q: Of what value is reducing the size of the kernel?

A: It reduces both the attack surface and the argument surface.

Q: What is the "attack surface"?

A: The number of ways that an adversary may exploit complexity in a system.

Q: What is the "argument surface"?

A: The time spent in arguing over design decisions because humans are using non-standard terms for the same concepts.

Q: Doesn't favoring space over time in kernel installation ignore the high cost of time compared to the low cost of memory, storage and bandwidth?

A: You're thinking of deployable images again -- not the reward criterion for a kernel candidate.

Q: What is the difference between a deployable image and a kernel candidate for SDARL?

A: The deployable image will likely be a partially-expanded version of the kernel candidate.

Q: What is to prevent the "winning" Linux kernel from being a minified version of the Haskell Operating System that no one can read but the authors?

A: Open source means the original "source" must be available along with the minimizer for source code review.

Q: What is to keep such an interpretive operating system from winning when it would be so slow as to be worthless?

A: The same thing that keeps any slow software from being accepted: Testing. In the case of the Linux kernel, we're talking growfiles, doio, iogen, etc. tests.

Q: Where would the money come from to reward winners?

A: What is the value of quality software? In the case of the Linux Test Project the contributors are "SUSE, Red Hat, Fujitsu, IBM, Cisco, Oracle and others." In fact, the value of quality software is up there with the value of high quality science: trillions of dollars a year.

Q: Are quality developers really motivated by money?

A: Exceptions, such as Fields Medalist Grigori Perelman's famous refusal of money, are "exceptions" because generally people really are motivated by money. That includes gifted software developers as well as gifted mathematicians. Artificial intelligence software generators are another matter but the principle is the same: those who develop those generative agents are, themselves, subject to reinforcement learning.

Q: Are you saying all software can be funded in this manner?

A: At least the most critical software -- such as the Linux kernel -- if not entire desktop distributions -- such as Ubuntu.

Q: Won't there be knock-down-drag-out fights between different programming language "religions"?

A: Language wars have a long history extending into the natural sciences, mathematics and philosophy. Indeed, SDARL would meta-discipline those disciplines to the extent they form the intellectual substrate for software models of reality.

Q: Where can I sign up?

A: Get involved with The Linux Test Project and figure out how to help them to see the SDARL light.

Monday, April 15, 2024

The Scientific Validity of the Algorithmic Information Criterion for Model Selection

Thesis

The Algorithmic Information Criterion (AIC), based on Kolmogorov Complexity, is a valid and principled criterion for model selection in the natural sciences, despite objections regarding the choice of the underlying Turing Machine.

Supporting Arguments

  1. Universality of NiNOR Gates: A finite, directed cyclic graph (DCG) of N-input NOR (NiNOR) gates can, given a starting state, perform any computation that a Turing Machine with finite memory can execute. This universality suggests that the choice of a specific computational model can be principled, akin to choosing an axiomatic basis in mathematics.
  2. Minimization of NiNOR Complexity: By creating an instruction set emulation program that simulates a directed cyclic graph of NiNOR gates (which in turn provides the instruction set), and another program, written in that instruction set, that outputs a given dataset, a parameter-free definition of NiNOR Complexity is established: the minimum combined length of these two programs (stated compactly after this list). Note that since both programs are written in the same non-arbitrary instruction set, this factors out any arbitrary Universal Turing Machine that might be chosen to emulate the instruction set emulator.
  3. Philosophical Consistency with Scientific Methods: By removing an "arbitrary" parameter from Kolmogorov Complexity's definition of Algorithmic Information, Solomonoff's proofs can be revisited without any parameter any more subjective than the proof of NOR gate universality. All that must be given up is the notion of infinities. Moreover, this revised definition of an Algorithmic Information Criterion for model selection retains its relevance to the dynamical systems of the natural world -- a decisive advantage over statistical information criteria.
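
Stated compactly (notation introduced here for convenience):

    NiNOR(x) = min { |E| + |p| : running p on the instruction set emulated by E outputs x }

where E ranges over NiNOR-graph emulation programs and p ranges over programs in the instruction set that E provides. Charging |E| to the total is what defeats the reductio below: hiding the dataset in the instruction set merely moves its bits from |p| to |E|.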

Counterarguments

  • Claim of Arbitrary Turing Machine Choice: Critics argue that the choice of Turing Machine in determining Kolmogorov Complexity is arbitrary because one can tailor a machine's instruction set to trivially minimize the complexity of any given dataset.
  • Reductio ad Absurdum on Turing Machine Instruction Set: Critics might use a reductio ad absurdum approach by proposing a Turing Machine whose instruction set includes a command that outputs the entire dataset in a single instruction, thus falsely reducing the complexity to an absurdly low value.

Rebuttals

  1. Non-Arbitrariness in Computational Model Choice: The choice of a particular model and its instruction set reflects underlying computational principles (e.g., the universality of NiNOR gates) and is not more arbitrary than foundational decisions in other scientific and mathematical fields.
  2. Logical Flaw in Critics’ Argument: The critic’s approach to arbitrarily defining a Turing Machine’s instruction set to minimize complexity does not properly consider the complexity of the instruction set itself in which the dataset is encoded. By focusing on trivializing the output instruction, they overlook the broader implications of the instruction set’s design, which fundamentally contributes to the system's overall complexity. This misrepresents the principle of Kolmogorov Complexity, which aims to measure the minimal description length of the dataset in a way that genuinely reflects its informational content, rather than artificially minimizing it through tailored instruction sets.

Conclusion

The critique of the Algorithmic Information Criterion (AIC) based on the supposedly arbitrary choice of Turing Machine underlying Kolmogorov Complexity does not withstand scrutiny. Proper understanding and application of AIC demonstrate that it robustly captures the essential complexity of datasets consistent with Solomonoff's proofs. This complexity includes the design of the instruction set itself, which should not be arbitrarily minimized to misrepresent the dataset's intrinsic informational content. Thus, the AIC remains a principled and effective method for model selection in the natural sciences. Indeed, prior criticisms based on the supposed subjective choice of UTM are considered not only specious but harmful to the scientific enterprise.

Thursday, January 18, 2024

Wolfram's CODATA Physical Constants Names & Function To Retrieve Them With Their Uncertainties

 (* List all names of CODATA physical constants so they are available for CTRL-F string search to find the canonical name used to access their properties such as uncertainty. *)

EntityValue["PhysicalConstant",{"CanonicalName"}]

Out[17]= {{AccelerationAssociatedWithCosmologicalExpansionRate},{AlphaParticleMass},{AmpereConstant},{AngstromStar},{AnimalMassScale},{AstronomicalUnit},{AtomicMassConstant},{AtomicMassConstantEnergyEquivalent},{AtomicPolarizabilityEquilibriumInternuclearDistanceProportionalityConstant},{AtomicSpecificHeatConstant},{AtomicUnitOfElectricChargeDensity},{AtomicUnitOfElectricConductance},{AtomicUnitOfElectricCurrent},{AtomicUnitOfElectricFieldStrength},{AtomicUnitOfElectricFieldStrengthGradient},{AtomicUnitOfElectricFirstHyperpolarizability},{AtomicUnitOfElectricPermittivity},{AtomicUnitOfElectricPolarizability},{AtomicUnitOfElectricPotential},{AtomicUnitOfElectricQuadrupoleMoment},{AtomicUnitOfElectricSecondHyperpolarizability},{AtomicUnitOfForce},{AtomicUnitOfFrequency},{AtomicUnitOfMagneticFlux},{AtomicUnitOfMagneticFluxDensity},{AtomicUnitOfMagneticMoment},{AtomicUnitOfMagnetizability},{AtomicUnitOfMomentum},{AtomicUnitOfPressure},{AtomicUnitOfTemperature},{AtomicUnitOfTime},{AtomicUnitOfVelocity},{AtomStructuralConstant},{AvogadroConstant},{AvogadroNumber},{BiotSavartConstant},{BlackHoleConjecturedFinalMass},{BlackHoleCriticalTemperature},{BohrMagneton},{BohrQuadrupoleMagneton},{BohrRadius},{BoltzmannConstant},{Carbon12AtomicMass},{Carbon12MolarMass},{Carbon12RelativeAtomicMass},{CeresSunMassRatio},{Cesium133HyperfineSplittingFrequency},{CirculationQuantum},{ClassicalElectronRadius},{ClassicalProtonRadius},{ConductanceQuantum},{CosmicMicrowaveBackgroundTemperature},{CosmologicalConstant},{CosmologicalNaturalLength},{CosmologicalNaturalMass},{CosmologicalNaturalTime},{CosmologicalQuantumPointEnergy},{CosmologicalQuantumPointLength},{CosmologicalQuantumPointMass},{CosmologicalRadius},{CoulombConstant},{CurieConstantSquareRootMultiplier},{DeuteronGFactor},{DeuteronMagneticMoment},{DeuteronMass},{DimensionlessHubbleParameter},{DiracMonopoleMagneticCharge},{DoubledCirculationQuantum},{EarthAuthalicRadius},{EarthEquatorialRadius},{EarthFirstEccentricity},{EarthFirstFlattening},{EarthGeomagneticReferenceRadius},{EarthInverseFlattening},{EarthLinearEccentricity},{EarthMass},{EarthMeanRadius},{EarthMeridionalRadiusOfCurvature},{EarthMoonMassRatio},{EarthPolarRadius},{EarthPolarRadiusOfCurvature},{EarthRectifyingRadius},{EarthRotationalAngularVelocity},{EarthSecondDynamicFormFactor},{EarthSecondEccentricity},{EarthSecondFlattening},{EarthThirdEccentricity},{EarthThirdFlattening},{EarthVolumetricRadius},{EddingtonConstant4},{EddingtonNumber},{EinsteinConstantSpeedOfLightSquared},{EinsteinConstantSpeedOfLightToTheFourth},{ElectricBohrDipoleMoment},{ElectricBohrQuadrupoleMoment},{ElectricConstant},{ElectromagneticCouplingConstant},{ElectromagneticInteractionStrengthAtZBosonMass},{ElectronAbsoluteMass},{ElectronChargeMassRatio},{ElectronComptonFrequency},{ElectronComptonWavelength},{ElectronGFactor},{ElectronGFactorAbsoluteValue},{ElectronGyromagneticRatio},{ElectronMagneticMoment},{ElectronMagneticMomentAnomaly},{ElectronMass},{ElectronMolarMass},{ElectronProtonElectricGravitationalForceRatio},{ElectronProtonMassRatio},{ElectronReducedGyromagneticRatio},{ElectronRelativeAtomicMass},{ElectronSchroedingerConstant},{ElectronSchroedingerSquareRootConstant},{ElectronWaveMass},{ElectroweakGravityCoupledMaximumCutoffEnergyScale},{ElementaryCharge},{FaradayConstant},{FermiCouplingConstant},{FineStructureConstant},{FirstEddingtonNumber},{FirstFowlerNordheimConstant},{FirstRadiationConstant},{FirstRadiationConstantForSpectralRadiance},{FixedNucleusAtomSchroedingerConstant},{FundamentalKinematicViscosity},{GalacticMassScale},{GalacticUnit},{GaussianGravitationalConstant},{GeneralRelativityMaximalForce},{GeneralRelativityMaximalPower},{GeometryActionQuantum},{GravitationalConstant},{GravitationalConstantPerSpeedOfLightSquared},{GravitationalCouplingConstantElectronElectron},{GravitationalCouplingConstantElectronProton},{GravitationalFineStructureConstant},{GravitationalPermeability},{HartreeEnergy},{HBarCProduct},{HCProduct},{HelionGFactor},{HertzKelvinRelationship},{HiggsBosonMass},{HiggsVacuumExpectationValue},{HorizonMassScaleAtEquality},{HubbleLength},{HubbleParameter},{HubbleTime},{HubbleVolume},{HydrogenAtomGroundStateLinearChargeDensity},{HydrogenAtomSchroedingerConstant},{IdealGasMolarVolume},{ImagePotentialEnergyConstant},{InternationalAnnealedCopperStandard},{InvariantSlowness},{InverseConductanceQuantum},{InverseFineStructureConstant},{InverseSquareRootElectricConstant},{JeansMassScaleAtEquality},{JordanNumber1},{JordanNumber2},{JordanNumber3},{JosephsonConstant},{JovianMassParameter},{JupiterEquatorialRadius},{JupiterPolarRadius},{KovtunSonStarinetBound},{LifeFormChemicalConversionFactor},{LorentzConstant},{LorentzUnit},{LorenzNumber},{LoschmidtConstant},{MagneticConstant},{MagneticCoulombConstant},{MagneticCouplingConstant},{MagneticFineStructureConstant},{MagneticFluxQuantum},{MaximumAtomicNumber},{MaximumNeutronStarMassScale},{MaximumStarRadiusSchwarzschildRadiusRatio},{MeanSolarIrradiance},{MillimagneticFluxQuantum},{MinimumNeutronStarMassScale},{MinimumOpacityLimitedFragmentMassScale},{MinimumStellarMassScale},{MolarGasConstant},{MolarMassConstant},{MolarPlanckConstant},{MONDConstant},{MONDLength},{MonochromaticRadiation540THzLuminousEfficacy},{MoonEarthMassRatio},{MuonComptonWavelength},{MuonGFactor},{MuonMagneticMoment},{MuonMass},{NaturalUnitOfEnergy},{NaturalUnitOfLength},{NaturalUnitOfMomentum},{NaturalUnitOfTime},{NeutronComptonWavelength},{NeutronGFactor},{NeutronGyromagneticRatio},{NeutronMagneticMoment},{NeutronMass},{NeutronReducedGyromagneticRatio},{NeutronStarMaximumAngularMomentum},{NonlinearQEDEffectsOnsetElectromagneticEnergyDensity},{NuclearMagneton},{ObservableUniverseMaximumAngularMomentum},{PallasSunMassRatio},{Parsecs},{PermeabilityRationalizationConstant},{PermittivityRationalizationConstant},{PhotonPhotonCrossSectionUpperLimit},{PiPlusOrMinusMass},{PiZeroMass},{PlanckArea},{PlanckConstant},{PlanckFrequency},{PlanckLength},{PlanckMass},{PlanckMassDensity},{PlanckTemperature},{PlanckTime},{PlanckVolume},{PlanetaryMassScale},{ProtonComptonWavelength},{ProtonElectronMassRatio},{ProtonElementaryViscosity},{ProtonGFactor},{ProtonGyromagneticRatio},{ProtonMagneticMoment},{ProtonMass},{ProtonMolarMass},{ProtonProtonElectricGravitationalForceRatio},{ProtonReducedGyromagneticRatio},{ProtonRelativeAtomicMass},{ProtonRMSChargeRadius},{QCDScale},{QEDEulerHeisenbergLagrangianPrefactor},{QuadraticHiggsCoefficient},{QuantizedHallConductance},{QuantumChannelThermalConductanceConstant},{QuantumJumpCharacteristicDecayTime},{QuarticHiggsCoefficient},{RadiationConstant},{ReducedBohrRadius},{ReducedFermiConstant},{ReducedPlanckConstant},{ReducedPlanckMass},{RelativisticBohrRadius},{RelativisticReducedPlanckMass},{RichardsonDushmanConstant},{RydbergConstant},{RydbergConstantHydrogen},{RydbergEnergy},{RydbergFrequency},{RydbergWavelength},{SackurTetrodeConstant},{SchottkyConstant},{SchwingerElectricFieldStrength},{SchwingerMagneticFluxDensity},{SecondEddingtonNumber},{SecondFowlerNordheimConstant},{SecondRadiationConstant},{ShemiZadehNumber},{ShemiZadehNumberB0},{SignedElementaryCharge},{Silicon220LatticeSpacing},{SiliconMolarVolume},{SolarConstant},{SolarEffectiveTemperature},{SolarLuminosity},{SolarMass},{SolarMassParameter},{SolarRadius},{SolarSchwarzschildRadius},{SommerfeldSupplyDensityConstant},{SpacetimeAtomAvogadroNumber},{SpeedOfLight},{SpeedOfSound},{SpeedOfSoundCondensedPhaseUpperBound},{StandardAccelerationOfGravity},{StandardAtmosphere},{StandardConcentration},{StandardMolality},{StandardPressure},{StandardStatePressure},{StefanBoltzmannConstant},{StellarMassScale},{StrongCouplingConstant},{StrongInteractionStrengthAtZBosonMass},{SunEarthMassRatio},{SunErisMassRatio},{SunJupiterMassRatio},{SunMarsMassRatio},{SunMercuryMassRatio},{SunNeptuneMassRatio},{SunPlutoMassRatio},{SunSaturnMassRatio},{SunUranusMassRatio},{SunVenusMassRatio},{TauComptonWavelength},{TauMass},{TerrestrialMassParameter},{ThompsonLampardCalculableCapacitorCapacitance},{ThomsonCrossSection},{TotalSolarIrradiance},{TwoSpeciesApproximateSingleDensityNonNeutralFailedWhiteDwarfRadius},{UniversalPhotogalvanicConstant},{UniverseAge},{UniverseAtomCount},{UniverseBaryonCount},{UniverseBaryonicMatterDensityParameter},{UniverseBaryonicMatterMassDensity},{UniverseBindingEnergyMassEnergyEquivalentRatio},{UniverseColdDarkMatterDensityParameter},{UniverseColdDarkMatterMass},{UniverseColdDarkMatterMassDensity},{UniverseCriticalMassDensity},{UniverseDarkEnergyDensityParameter},{UniverseDarkEnergyMassDensity},{UniverseDensityParameter},{UniverseDiameter},{UniverseDimensionlessBaryonPhotonRatio},{UniverseElectronCount},{UniverseEntropy},{UniverseMass},{UniverseMassDensity},{UniverseNeutrinoDensityParameter},{UniverseNeutronCount},{UniverseNucleonCount},{UniverseParticleCount},{UniversePressurelessMatterDensityParameter},{UniversePressurelessMatterMassDensity},{UniverseProtonCount},{UniverseRadius},{UniverseVacuumEnergyDensity},{UniverseVacuumEnergyDensityEnergyScale},{UniverseVolume},{VacuumDominatedUniverseMassScale},{VacuumImpedance},{VestaSunMassRatio},{VonKlitzingConstant},{WaterFreezingTemperature},{WaterTriplePointTemperature},{WBosonMass},{WeakHyperchargeCouplingConstant},{WeakInteractionStrengthAtZBosonMass},{WeakIsospinCouplingConstant},{WeakMixingAngleConstant},{WeinbergAngle},{WienFrequencyDisplacementLawConstant},{WienWavelengthDisplacementLawConstant},{YukawaBottomQuarkCouplingConstant},{YukawaCharmQuarkCouplingConstant},{YukawaDownQuarkCouplingConstant},{YukawaElectronCouplingConstant},{YukawaMuonCouplingConstant},{YukawaStrangeQuarkCouplingConstant},{YukawaTauonCouplingConstant},{YukawaTopQuarkCouplingConstant},{YukawaUpQuarkCouplingConstant},{ZBosonMass}}

In[18]:= (* Look up CODATA quantity given canonical name and return its uncertain value. *)

codata[canonicalName_]:= Around@@Entity["PhysicalConstant",canonicalName][{"Value","StandardUncertainty"}]

In[20]:= codata["ProtonMass"]

Out[20]= 1.672621923(7±5)*10^-27 kg

Thursday, October 19, 2023

NiNOR Complexity

I'm not going to go into a lot of explanation about why the following is important because, well, it isn't really that important except that there are a lot of pedantic twits running around obstructing a major advance in the philosophy of science by, like Theodoric of York, saying "NAHHH..." to the Algorithmic Information Criterion for causal model selection as the most principled information criterion we have in the natural sciences.  Their "NAHHH..." takes the form of dismissing the Algorithmic Information Criterion on the grounds that its measure of information, Kolmogorov Complexity, depends on the choice of Turing Machine which, they claim, is "arbitrary".  This means they think they can choose the instruction set of their Turing Machine such that a single instruction outputs the entire set of observations under consideration.  Thus, their supposed reductio ad absurdum is to claim that the Kolmogorov Complexity of any given set of data may be measured as the bit length of a single instruction of their instruction set -- ideally one bit.

No, it isn't and no, they can't, and they know deep down that they're full of pedantic BS. But, OK, I'll play their pedantic game with them, not that they'll listen since, of course, Theodoric of York wants to get back to his bloodletting cures for his dying patients and has no time for such nonsense as "the scientific method":

It is known that a finite, directed graph of N-input NOR (henceforth NiNOR) gates can perform any computation that a Turing Machine with finite memory can. So...

TLDR: Write a program, in a chosen instruction set, that simulates the directed graph of NiNOR gates that provides the chosen instruction set, including the memory that holds the program. Use that instruction set to write a program that, when executed by the instruction set simulation program, outputs the given dataset. The minimum sum of the length of these two programs is the NiNOR-Complexity.

Let me expand just a little bit for those left wondering what I'm talking about:

The choice of Turing machine is no more arbitrary than the choice of axiomatic basis for arithmetic. The natural sciences don't generally bother with "debunking" Ockham's Razor over debates regarding the choice of arithmetic's axioms. But, *sigh*, OK. If I need to spell this "philosophical nuisance" out in detail, let's descend a level of abstraction from computation to the more primitive concept of a NOR gate -- or, more specifically, a Directed Cyclic Graph of N-input NOR gates. Few would regard this as "arbitrary" since the choice of NAND is a mere isomorphic rotation in logic and is similarly "universal" -- and _minimal_.

It is known that a finite, directed graph of N-input NOR (henceforth NiNOR) gates can perform any computation that a Turing Machine with finite memory can. So...

Choose an emulation program, in a chosen instruction set, that simulates a chosen directed cyclic graph of NiNOR gates that provides that instruction set. This emulation program will have a parameter that is the size of the memory (i.e., the number of times to replicate the memory cells) such that it can hold the emulator, plus one unspecified integer: the amount of additional memory to contain the program to be executed by the emulator. Use that instruction set to choose an executable archive program that outputs a given set of observations encoded as bits. The choices that minimize the combined size of the emulation program and the executable archive program define the NiNOR-Complexity. Now, we still have a free parameter, but is it a choice of Turing machine? No. It's an integer, the size of which goes up only as the log2 of the amount of memory required to expand the executable archive.
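
For the concrete-minded, here is a toy of the primitive layer in Python (the wiring format is an invention for illustration, not a specification): one synchronous step of a directed cyclic graph of NOR gates, plus the cross-coupled pair showing how such a graph holds state -- the memory cells the construction above replicates.

def step(netlist, state, inputs):
    # One synchronous tick: a gate's next output is 1 iff all of its
    # current inputs are 0 (an N-input NOR). Externally driven signals
    # ('inputs') are copied through rather than computed.
    nxt = dict(inputs)
    for gate, srcs in netlist.items():
        nxt[gate] = int(not any(state[s] for s in srcs))
    return nxt

# Two cross-coupled NOR gates form a set-reset latch: one bit of memory
# built from nothing but NORs.
latch = {"q": ["reset", "qbar"], "qbar": ["set", "q"]}
state = {"q": 0, "qbar": 1, "set": 0, "reset": 0}
state = step(latch, state, {"set": 1, "reset": 0})  # pulse 'set'...
state = step(latch, state, {"set": 1, "reset": 0})
state = step(latch, state, {"set": 0, "reset": 0})  # ...then release it
assert state["q"] == 1  # the bit stays latched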

We're still left with the "uncomputability" issue, but that's just one more philosophical nuisance since what it really means is that we can't prove that a given pair of programs is minimal -- that its combined length is the NiNOR Complexity of a given dataset. Progress in the natural sciences is not held back by such concerns over proving that a given theory is the ultimate theory of nature either.





Saturday, October 01, 2022

Militia Money

Militia Money is Property Money defining its sovereigns -- "those who place their flesh, blood and bone between chaos and civilization" -- as those who are registered for the draft.

This definition overcomes a number of barriers to putting Property Money into practice:

  • It operationally defines who sovereigns are thereby reducing rhetorical attacks by reducing the "argument surface".
  • Draft registration is a legally recognized class distinction.
  • This class pertains specifically to "those who place their flesh, blood and bone between chaos and civilization".
  • The mandatory nature of draft registration implies that society owes a debt to this class.

Moreover, because the draft is currently restricted to men, Militia Money ameliorates the catastrophe befalling the developed world, whose economies outbid young men for the fertile years of economically valued women -- thereby depleting the next generation of economically valuable characteristics.

As Militia Money is adopted, it is likely that the existing political entities will, using Israel as an exemplar, attempt to re-impose this civilizational catastrophe by expanding the draft to include young women. This disingenuous tactic will backfire for three reasons:

  1. Israel's government did not make the mistake of subverting the evolutionary psychology of its young women by rendering, in their mind, its young men manifestly impotent to defend their territory against the mass immigration of military aged men who would be viewed as de facto conquerors by the primitive emotional brain centers of both men and women.
  2. Neocons and most members of American-Israel Public Affairs Committee (AIPAC) both played conspicuous roles in encouraging the West to make this mistake and both are Jewish-identified movements/organizations.
  3. Awareness of both of the above will expand along with Militia Money for the simple reason that the evolutionary psychology of territory will begin to re-emerge in the waking consciousness of young men, thereby remediating their self-esteem and freeing their minds from taboos of the post-WW II era.

The primary barrier to adoption of Property Money, hence Militia Money, will be the inability of property owners to recognize that property titles are founded on and granted by sovereign force. In discussing Militia Money with property owners, the best way of helping them recognize this origin of entitlement is to ask them whether they would prefer that their tax revenue go to politicians or to young men who are registered for the draft. Although it is true that most property owners -- particularly employers -- will have a low opinion of young men generally, forcing them to compare with politicians may help them.

Obviously, as can be seen among the very wealthy and among employers who contribute to Republican establishment candidates who are soft on immigration, some of these property owners will not be swayed. Moreover, they will likely recognize that Militia Money is a threat to them since they have sold out their people and their nation and will likely be seen as the traitors they are. But at least you will have given them a chance to escape that fate.

Other property owners will recognize the business opportunities represented by the privatization of all functions of government.  These property owners will be among the new Founders.

Friday, October 29, 2021

Networking The American Pioneer

Below is something I wrote in 1982.  With that prediction proved correct, please consider supporting what I'm doing now to address the situation:

There is a tremendous danger that careless promotion of deregulation will be dogmatically (or purposefully) extended to the point that there may form an unregulated monopoly over the information replicated across the nation-wide videotex network, now under development. If this happens, the prophecies of a despotic, "cashless-society" are quite likely to become a reality. My opinion is that this nightmare will eventually be realized but not before the American pioneers have had a chance to reach each other and organize. I base this hope on the fact that the first people to participate in the videotex network will represent some of the most pioneering of Americans, since videotex is a new "territory".

Why Didn't the Internet Take Off In 1983?

Videotex Networking and the American Pioneer (Score:5, Informative)

by Baldrson ( 78598 ) * on Friday March 02, 2012 @02:24AM (#39217853)

From the Way Back archives [archive.org].

I wrote the following article during my tenure as the chief architect for the mass-market videotex experiment conducted by AT&T and Knight-Ridder News called "Viewtron" -- a service of the joint-venture company, Viewdata Corporation of America.

As can be sensed in the article, I had encountered some fairly frustrating situations and was about to be told by the corporate authorities that my telecomputing architecture, which would have provided a dynamically downloaded Forth graphics protocol in 1983 evolving into a distributed Smalltalk-like environment beginning around 1985, would be abandoned due to a corporate commitment to stick with Tandem Computers as the mainframe vendor -- a choice which I had asserted would not be adequate for my architecture. (At least Postscript survived.) I was subsequently offered the head telecomputing software position at Prodigy by IBM and turned it down when they indicated they would not support my architecture either, due to a commitment to limit merchant access to their network to only those who had a special status with the service provider (IBM/CBS/Sears). The distributed Smalltalk system was specifically designed to allow the sort of grassroots commerce now emerging in the world wide web -- particularly as people recognize JavaScript is similar to the Self programming language and the Common Lisp Object System. This wasn't in keeping with IBM's philosophy at that time since they had yet to be humbled by Bill Gates.

My independent attempt at developing this sort of service was squashed by the U.S. government when it provided UUCP/Usenet service to a competitor in San Diego and would not offer me the same subsidy via MILnet -- a network that was not for public access, by law, and which was exclusively for military use. My complaints to DoD investigators resulted in continual "We're looking into it." replies.

Videotex Networking and The American Pioneer

by Jim Bowery (circa 1982)

With the precipitous drop in the price of information technology, computer-based communication has come within the technical and economic reach of the mass-market. The term generally used for this mass-market is "videotex" because it reduces the cost of entry into the home by using the most ubiquitous video display device, the television screen, to deliver its service.

The central importance of this new market is that it brings the capital cost of establishing a publication with nation-wide distribution to within the reach of the mass-market as well. This means that anyone who is a "consumer" of information on this new technology can also be a "producer" of information. The distinction between editorial staff and readership need no longer be a function of who has how much money, but rather, who has the greatest consumer appeal. The last time an event of this magnitude took place was the invention of the offset printer which brought the cost of publication to within the reach of small businesses. That democratization of cultural evolution was protected in our constitution under freedom of the press. Freedom of speech was intended for the masses. In this new technology, the distinction between press and speech is beginning to blur. Some individuals and institutions see this as removing the new media from either of the constitutional protections rather than giving it both. They see a great danger in allowing the uncensored ideas of individuals to spread across the entire nation within seconds at a cost of only a few cents. A direct quote from a person with authority in the management of this new technology: "We view videotex as 'we the institutions' providing 'you the people' with information." I wonder what our founding fathers would have thought of a statement like that.

Mass-media influences cultural evolution in profound ways. Rather than assuming a paternalistic posture, we should be objective about these influences in making policy and technology decisions about the new media. It is important to try and preserve the positive aspects of extant media while eliminating its deficits. On the positive side, mass-media is very effective at eliminating "noise" or totally uninteresting information compared to, say, CB radio. This is accomplished via responsible editorial staffs and market forces. On the negative side, much "signal" or vital information is eliminated along with the noise. A good example of this is the way mass-media attends to relatively temporal things like territorial wars, nuclear arms, economic ills, social stratification ... etc. to the utter exclusion of attending to the underlying cause of these events: our limits to growth. The need for "news" is understandable, but how long should we talk about which shade of yellow Joe's eye is, how his wife and her lover feel about it and whether he will wear sun-glasses out of embarrassment before we start talking about a cure for jaundice?

Mass-media has failed to give appropriate coverage to the most significant and interesting issue facing us because of the close tie between institutional culture and editorial policy. Institutional evolution selects people-oriented people -- individuals with great personal force. These people are consumed with their social orientation to the point that they ignore or cannot understand information not relating in fairly direct ways to politics or the psychological aspects of economics. Since institutional evolution is reflected in who has authority over what, editorial authority eventually reflects the biases of this group. They cannot understand life, except as something that generates politics and "human interest" stories. They may even, at some level of awareness, work to maintain our limits to growth since it places their skills at a premium. In a people-saturated environment (one at its limits to growth) people-oriented people are winners.

Actually, this is an ancient problem that keeps rearing its ugly head in many places in many forms. In my industry its called the "Whiz Kids vs. MBAs" syndrome. Others have termed it "Western Cowboys vs. Eastern Bankers". The list is without end. I prefer to view it as a more stable historical pattern: "Pioneers vs. Feudalists".

Pioneers are skilled at manipulating unpeopled environments to suit their needs whereas feudalists are skilled at manipulating peopled environments to suit their needs. Although, these are not necessarily exclusive traits, people do seem to specialize toward one end or the other simply because both skills require tremendous discipline to master and people have limited time to invest in learning.

Pioneers want to be left alone to do their work and enjoy its fruits. Feudalists say "no man is an island" and feel the pioneer is a "hick" or worse, an escapist. Feudalists view themselves as lords and pioneers as serfs. Pioneers view feudalists as either irrelevant or as some sort of inevitable creeping crud devouring everything in its path. At their best, feudalists represent the stable balance and harmony exhibited by Eastern philosophy. At their worst, feudalists represent the tyrannical predation of pioneers unable to escape domination. At their best, pioneers represent the freedom, diversity and respect for the individual represented by Western philosophy. At their worst, pioneers represent the inefficient, destructive exploitation of virgin environs.

The Atlantic and Pacific Oceans selected pioneers for the New World. The Pioneer is in our culture and our blood. But now that our frontier resources have vanished, the "creeping crud" of feudalism is catching up with us. This change in perspective is making itself felt in all aspects of our society: big corporations, big government and institutional mass-media. As the disease progresses, we find ourselves looking and behaving more and more like one big company town. Soviet Russia has already succumbed to this disease. The only weapon we have that is truly effective against it is our greatest strength: innovation.

I firmly believe that, except to the extent that they have been silenced by the media's endless barrage of feudalistic values, the American people are pioneers to their core. They are starved to share these values with each other but they cannot because there is no mode of communication that will support their values. Videotex may not be as efficient at replicating and distributing information as broadcast, but it does provide, for the first time in history, a means of removing the editorial monopoly from feudalists and allowing pioneers to share their own values. There will be a battle over this "privilege" (although one would think freedom of the press and speech should be rights). The outcome of this battle of editorial freedom vs. control in videotex may well determine whether or not civilization ends in a war over resources, continues with the American people spear-heading an explosion into the high frontier or, pipe-dream of pipe-dreams, slides into world-wide feudalism hoping to control nuclear arms and "equitably" distribute our dwindling terrestrial resources.

There is a tremendous danger that careless promotion of deregulation will be dogmatically (or purposefully) extended to the point that there may form an unregulated monopoly over the information replicated across the nation-wide videotex network, now under development. If this happens, the prophecies of a despotic, "cashless-society" are quite likely to become a reality. My opinion is that this nightmare will eventually be realized but not before the American pioneers have had a chance to reach each other and organize. I base this hope on the fact that the first people to participate in the videotex network will represent some of the most pioneering of Americans, since videotex is a new "territory".

The question at hand is this: How do we mold the early videotex environment so that noise is suppressed without limiting the free flow of information between customers?

The first obstacle is, of course, legal. As the knights of U.S. feudalism, corporate lawyers have a penchant for finding ways of stomping out innovation and diversity in any way possible. In the case of videotex, the attempt is to keep feudal control of information by making videotex system ownership imply liability for information transmitted over it. For example, if a libelous communication takes place, corporate lawyers for the plaintiff will bring suit against the carrier rather than the individual responsible for the communication. The rationalizations for this clearly unreasonable and contrived position are quite numerous. Without a common carrier status, the carrier will be treading on virgin ground legally and thus be unprotected by precedent. Indeed, the stakes are high enough that the competitor could easily afford to fabricate an event ideal for the purposes of such a suit. This means the first legal precedent could be in favor of holding the carrier responsible for the communications transmitted over its network, thus forcing (or giving an excuse for) the carrier to inspect, edit and censor all communications except, perhaps, simple person-to-person or "electronic mail". This, in turn, would put editorial control right back in the hands of the feudalists. Potential carriers' own lawyers are already hard at work worrying everyone about such a suit. They would like to win the battle against diversity before it begins. This is unlikely because videotex is still driven by technology and therefore by pioneers.

The question then becomes: How do we best protect against such "legal" tactics? The answer seems to be an early emphasis on secure identification of the source of communications so that there can be no question as to the individual responsible. This would preempt an attempt to hold the carrier liable. Anonymous communications, like Delphi conferencing, could even be supported as long as some individual would be willing to attach his/her name to the communication before distributing it. This would be similar, legally, to a "letters to the editor" column where a writer remains anonymous. Another measure could be to require that only individuals of legal age be allowed to author publishable communications. Yet another measure could be to require anyone who wishes to write and publish information on the network to put in writing, in an agreement separate from the standard customer agreement, that they are liable for any and all communications originating under their name on the network. This would preempt the "stolen password" excuse for holding the carrier liable.

Beyond the secure identification of communication sources, there is the necessity of editorial services. Not everyone is going to want to filter through everything published by everyone on the network. An infrastructure of editorial staffs is that filter. In exchange for their service the editorial staff gets to promote their view of the world and, if they are in enough demand, charge money for access to their list of approved articles. On a videotex network, there is little capital involved in establishing an editorial staff. All that is required is a terminal and a file on the network which may have an intrinsic cost as low as $5/month if it represents a publication with "only" around 100 articles. The rest is up to the customers. If they like a publication, they will read it. If they don't they won't. A customer could ask to see all articles approved by staffs A or B inclusive, or only those articles approved by both A and B, etc. This sort of customer selection could involve as many editorial staffs as desired in any logical combination. An editorial staff could review other editorial staffs as well as individual articles, forming hierarchies to handle the mass of articles that would be submitted every day. This sort of editorial mechanism would not only provide a very efficient way of filtering out poor and questionable communications without inhibiting diversity, it would add a layer of liability for publications that would further insulate carriers from liability and therefore from a monopoly over communications.
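
[Sketch added for concreteness: an editorial staff reduces to a set of approved article IDs, and the customer's "logical combination" is ordinary set algebra. The IDs below are made up.]

staff_a = {101, 102, 205}     # article IDs approved by staff A
staff_b = {102, 205, 309}     # article IDs approved by staff B
feed_or = staff_a | staff_b   # "approved by staffs A or B inclusive"
feed_and = staff_a & staff_b  # "only those articles approved by both A and B"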

In general, anything that acts to filter out bad information and that is not under control of the carrier, acts to prevent the carrier from monopolizing the evolution of ideas on the network.

As a tool for coordinating organizations, a customer-driven videotex communications facility would be just as revolutionary in its impact. In particular, organizations with simple hierarchical structures could automate almost all of their accounting and coordination via a videotex network. In addition to the normal modes of organizational management, new modes will spring up that are impractical outside of an information utility. Perhaps the most important example involves the way individuals are given authority within organizations. Traditional organizations select authority via a top-down, authoritarian system or via a bottom-up democratic system. The authoritarian system is more efficient than the democratic system, but it is also more vulnerable to mistakes and corruption. The democratic system gets harder to maintain the larger it gets. People have a natural limit to the number of people they can effectively associate with. In large representative democracies, such as our government, a national union, etc. virtually no one voting for a candidate knows the candidate personally. This, combined with the event called "election" creates the "campaign" where the virtues of democracy are almost entirely subverted by its vices. A very simple system of selecting representation or proxy exists which eliminates "elections" and thus campaigns, excessive politics and corruption. It is called CAV: "continuous approval voting". It is too expensive to maintain manually, but with a videotex network, it becomes just as cheap as any other system (it may be less expensive).

In CAV, a group of people who associate with each other select a representative from among themselves. Each member has an "approval list" which only they can see and alter. On this list, they give the name of every individual they feel is competent to be their representative. The person whose name appears on the most approval lists is the representative. At any time, a member may change their approval list. That change could put another at the top of the approval heap and therefore force a recall of the previous representative. A hierarchy of such groups could grow to unlimited size, still with no campaigns and everyone evaluating only those who they are in a position to associate with. Of course, thresholds for recall, terms of office and other embellishments may be included to optimize the system for particular purposes. The point is that this represents just one of many new forms of democracy that could change the way privilege and accountability are allocated in our institutions.
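
[Sketch added for concreteness: the CAV rule in a few lines of Python. The data layout -- one private approval set per member -- is a choice made here for illustration; rerunning the tally after any list change is what makes recall continuous.]

from collections import Counter

def representative(approvals):
    # approvals: member -> that member's private approval list (a set).
    # The representative is whoever appears on the most lists.
    tally = Counter(name for lst in approvals.values() for name in lst)
    return tally.most_common(1)[0][0]

approvals = {"ann": {"bea", "cal"}, "bea": {"cal"}, "cal": {"cal", "bea"}}
assert representative(approvals) == "cal"  # cal is on all three lists
approvals["ann"] = {"bea"}                 # ann changes her list...
approvals["cal"] = {"bea"}                 # ...and so does cal
assert representative(approvals) == "bea"  # cal is recalled, no election event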

The power of this sort of tool will be so profound that the first organizations to take advantage of it will represent an unprecedented political and economic force. As stated earlier, it appears the demography of early customers will favor organizations oriented toward pioneering values. If the development of technology for utilization of nonterrestrial resources continues, it is quite likely that an organization will form to exploit those resources, by-passing government, military and traditional corporate planning. Of course, these institutions won't like this, just as third-world governments tried to tie down nonterrestrial resources with the so-called "Moon Treaty". The ensuing political battle will probably come out in favor of allowing the organization to develop the resources in exchange for some form of taxation.

Professional societies will be able to carry on continuous year-round conferences. The time for feed-back determines the rate of advance in most advanced technologies. Videotex can reduce that feed-back time from months to minutes. Again, societies structured appropriately will be able to take maximum advantage of this sort of system. This means only new or flexible old societies will receive the full force of this technology's benefits. A society which places internal politics before its primary purpose will be by-passed. Once again, pioneer values will be promoted.

The conferencing system would probably be organized in a hierarchy of discussions. Everyone would see the top level discussion but only those at the top could contribute to it directly. At the bottom levels, individuals could comment and if received with enough credulity by higher level members, their comment could be raised to a higher level in the conference, thus reaching a number of people increasing geometrically with each level. The key to the success of such a hierarchical conference, as in any conference, is the way "speakers" are selected, or the credulity factor mentioned above. If this sort of conferencing system combines with the CAV system mentioned above, the resulting conferences will be even more interesting.

Currently, almost half a researcher's time is spent searching through hierarchies of reference indexes, or in duplicating efforts that could be avoided if they did such searches. If professional conferences and articles were submitted and published on a videotex network, this time would be reduced to insignificance. Furthermore, the interpersonal communications would allow a researcher to ask an author questions about his publication and get answers, potentially within seconds, without the inconvenience or imposition of a phone call.

(to be continued)

Re:Videotex Networking and the American Pioneer (Score:2)

by Baldrson ( 78598 ) * on Friday March 02, 2012 @05:16AM (#39218593)

Some other aspects of my architecture:

The primary discipline stated in a memo to the technical staff: "The home terminal is to be viewed as the host system nearest the user."

64-bit object ID with the system ID counter bit-reversed from the high order bits. This division of the 64-bits was to be temporary, giving way to a distributed hash that would derive the destination system.
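
[A hypothetical reconstruction of that ID layout in Python -- the exact scheme is a guess from the description: the system ID counter is bit-reversed into the high-order end while each system's object counter grows from the low-order end, so no fixed split of the 64 bits is ever committed to.]

def object_id(system_counter: int, local_counter: int) -> int:
    # Bit-reverse the system counter across the full 64-bit width,
    # packing it against the high-order end.
    high = int(format(system_counter, "064b")[::-1], 2)
    assert high & local_counter == 0, "the two allocations have collided"
    return high | local_counter

assert object_id(1, 5) == (1 << 63) | 5  # system 1's objects sit under the top bit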

A distributed atomic action protocol based on David P. Reed's thesis that is now realized in the Croquet Project's "tea-time". A major difference being that the object's version ID was made fixed length by allocating a fixed interval of values for the loop counter for each call depth. Reed required a timeout for each level, and I just told him, "OK, if you can demand a timeout, I can demand a state count limit." Arvind and Gostelow's U-interpreter was a virtual dataflow machine with data tokens that were isomorphic to Reed's so I was trying to get them together to do a functional programming model of atomic actions, since they were just two floors from each other in MIT's LCS.

The Forth virtual machine, initially to be burned into the terminal's ROM, would be replaced by a Novix chip or similar derivative in the next generation. This would be the hardware that would interpret the Smalltalk. Moreover, machine-dependent Forth words would have multiple implementations that would be selected based on the type of terminal. My expectation was that the then-recently-discovered type inference and related JIT techniques (pioneered by HP's version of Basic back in the 70s) could make Smalltalk execution on a Novix style chip practical.

Re:Videotex Networking and the American Pioneer (Score:1)

by Hurga ( 265993 ) on Friday March 02, 2012 @11:53AM (#39220623)

You wrote that 30 years ago? That's an extremely visionary piece, I have to admit. And surprisingly current still, considering the fight between established media and social networks.

https://tech.slashdot.org/comments.pl?sid=2702791&cid=39217853

Monday, September 13, 2021

How I Predicted 9/11

Nearly 30 years ago (see "Race, Gender and the Frontier" for my 40,000ft view of human sociosexual evolution), I started predicting a split among Jews between serious Zionists and the diaspora Jewish traditions. On that basis I predicted a false flag operation around the turn of the millennium -- what is now known as "9/11". As part of that split, I foresaw a portion of Zionist Jews being forced into what they themselves would have regarded as "Nazism" of a sort, in what others later called a "clash of civilizations" involving something that might be called "PanWestern Fascism". It was within this PanWestern Fascism that I foresaw a turn-of-the-millennium "Reichstag Fire" (false flag). Indeed, the Neocon "Project for a New American Century" fit my prediction nearly perfectly, in its explicit desire for a "Pearl Harbor" event to catalyze the wars that I had already predicted in the public record.

The following are excerpts from my posts in the Usenet archives:

1996/01/30:
Watch out for the REAL “Reichstag Fire” coming soon to a major media event near you. The OKC bombing may or may not have been an attempt at such a fraud, but it was a failure because it attempted to frame a shallower culture (militias/patriots) rather than deeper cultures (Arabs/Africans).... the conditional probability of the perpetrators of this particular “Reichstag Fire” being caught is MUCH higher than it was with the original version under the Nazis.

1996/07/27:
Oh, I know… you aren’t all that hopped up just yet, but just let a Reichstag incident get blared through your boob-tube enough and you’ll fall right in line…

1997/05/08:
WHEREAS the SS [Synagogue of Satan—JAB] has to put on a really big show for their Christian sheep in the West right around the year 2000 and,

NOW THEREFORE BE IT RESOLVED that a Greater Western Civilization shall be constituted by unifying Judaism, Christianity and Indo-European identity via academic and theocratic sophistry and that said Greater Western Civilization shall declare itself superior to all other cultures extant and that any opposition to or competition with such sophistically justified superiority shall be grounds for any and all actions of fraud and/or violence because God Is On Our Side…

Thus sprach the Synagogue of Satan, on this day T minus a few years before the Greatest Story Ever Told by Hollywood or any other incarnation of the SS since the Diaspora.

1998/03/01:
Israel’s creation was important in the sense that it is time to mop-up JudeoChristianity—the millenium is a good time for that…

Check out the ethnicity of the folks at Clinton’s “town meeting” trying to trump up support for a war in the middle east on behalf of Israel—putting the US in a lonely international position and making people like you and I a target for terrorism or a Jewish-inspired Reichstag.