From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: Mon, 21 Dec 2009 00:56:54 -0500
Subject: Re: tty
MailingList: hillgang II
On Sun, 12/20/09 10:27:53, Shmuel wrote:
sorry ... should have typed lower-case "crje" as generic since it was neither CRBE nor CRJE ... since as I said, it was just a bunch of software changes that I wrote for hasp to support 2741 & tty terminals ... along with an interactive editor that i wrote (based on the syntax & commands from the cms editor) and embedded in hasp.
in '76, i was still running 2741 at home ... didn't get 300 baud cdi miniterm at home until '77
there were channel attached 3270s at work. early '80 got 1200 baud 3101 (glass ascii) ... although we put in 3275 for Backus at home in '79.
we had got some early "mod1" 3101s ... and then we got image from japan to burn our own ROMs to turn them into "mod2s" (as well as some mod2 boards).
old email about topaz/3101
https://www.garlic.com/~lynn/2006y.html#email791011
https://www.garlic.com/~lynn/2006y.html#email791011b
in this old post
https://www.garlic.com/~lynn/2006y.html#0
a little later email about generic glass teletype support (in same
post)
https://www.garlic.com/~lynn/2006y.html#email800301
note part of the above email talks about having done some changes to DMSCIT in CMS to chain multiple queued lines together in one SIO (as a means of reducing CP interaction & overhead ... generic for all "real" terminal types ... this was in contrast to something with the same effect done in CP for CMS ... when the "real" terminal type was 3270). this comes up in this reference
https://www.garlic.com/~lynn/2009r.html#64 terminal type and queue drop delay
which refers to these posts regarding most recent Hillgang meeting:
https://www.garlic.com/~lynn/2009o.html#80
https://www.garlic.com/~lynn/2009o.html#82
and then ...
https://www.garlic.com/~lynn/2009p.html#37
which in turn refers to this old email:
https://www.garlic.com/~lynn/2001f.html#email830420
in this old post
https://www.garlic.com/~lynn/2001f.html#57
the issue was that a lot of work had been done on vm/sp multiprocessor support directed towards improving TPF thruput running in a single processor virtual machine on an otherwise relatively idle multiprocessor (the issue being that TPF didn't have multiprocessor support and the 3081 was originally intended to ship only as a multiprocessor). The downside was that the changes degraded multiprocessor thruput for just about every other customer. To try and compensate ... there was all this stuff done to try and cut the 3270 terminal queue drop/add chatter (as described in the above email) ... but in an indirect way. The indirection didn't always work in the way hoped for ... and it didn't do anything for some large customers that didn't have 3270s ... but did have large numbers of relatively high-speed glass ascii terminals.
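the chained-SIO idea mentioned in the email above (queuing several pending output lines behind a single SIO via CCW command chaining) can be sketched roughly as follows. the CCW layout is the standard S/360 8-byte format, but the command code, buffer addresses and line data are illustrative placeholders, not the actual DMSCIT code:

```python
# Sketch: batch queued terminal output lines into one channel program so a
# single SIO (and a single CP interaction) covers all pending lines, instead
# of one SIO per line. All values except the 8-byte CCW layout are invented.
import struct

CC = 0x40     # command-chain flag: channel continues with the next CCW
WRITE = 0x01  # illustrative write command code

def build_channel_program(lines):
    """Return one 8-byte CCW per queued line, command-chained together."""
    ccws = []
    addr = 0x1000                      # pretend data buffer addresses
    for i, line in enumerate(lines):
        flags = CC if i < len(lines) - 1 else 0   # last CCW ends the chain
        # CCW: cmd(1) | data address(3) | flags(1) | pad(1) | count(2)
        ccws.append(struct.pack(">B3sBBH", WRITE,
                                addr.to_bytes(3, "big"), flags, 0, len(line)))
        addr += len(line)
    return ccws
```

with the chain built, the driver issues one SIO for the whole list rather than dropping and re-adding the virtual machine to the queue once per line.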
and then there is this old email about getting mod2 3101 ... with 1200
baud vadic
https://www.garlic.com/~lynn/2006y.html#email800311
https://www.garlic.com/~lynn/2006y.html#email800312
https://www.garlic.com/~lynn/2006y.html#email800314
in this post
https://www.garlic.com/~lynn/2006y.html#4
when 3270s first came out ... the cms editor was initially modified to do full screen display ... but commands/changes/etc were still effectively "line mode". I had done something similar at the univ. in the 60s ... the univ. had a 2250 mod1 (direct channel attach) connected to the 360/67. cp67/cms came with a 2250 driver library written at lincoln labs (originally for use with cms fortran programs). I took the lincoln labs 2250 library ... and adapted a version of the cms editor to do "full screen" display on the 2250 (similar to what was later initially done in vm370/cms for 3270s).
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PDP-10s and Unix
Newsgroups: alt.folklore.computers
Date: Mon, 21 Dec 2009 10:22:48 -0500
jmfbahciv <jmfbahciv@aol> writes:
yes ... "cp67i" ... modified to run on 370 virtual memory "hardware" (and running regularly for a year in a virtual machine before any hardware was available) ... was used to test the original engineering hardware. folklore has it that on the original engineering 370/145, "cp67i" was ipl/booted and wouldn't work. after some diagnostics, it was determined that the engineers had "reversed" two of the new 370 opcodes. "cp67i" was quickly patched to correspond with the (incorrect) hardware (pending the engineers fixing things).
part of the issue was that "370" (and 360 before it) was being developed for several different machine models at different plant sites around the world (pok, kingston, endicott, Boeblingen, etc). there was the (370) architecture "red book" ... a paper printed copy in a "red" 3-ring binder ... from a cms script file (originally "dot" formatting commands ala runoff ... but after GML was invented at the science center, GML "tag" support was also added).
a subset of the "red book" was the 370 principles of operation ... printed using a specific cms command line option. the "red book" cms script file had conditionals around the additional architecture sections. the issue was that there were completely different engineering teams at different locations doing 370 model implementations ... using totally different technology ... and so a fairly rigorous definition was required ... in order to achieve a consistent implementation across all the models (by different engineering teams at different locations).
recent posts mentioning "cp67i" that ran on 370:
https://www.garlic.com/~lynn/2009e.html#4 Cost of CPU Time
https://www.garlic.com/~lynn/2009f.html#33 greenbar
https://www.garlic.com/~lynn/2009g.html#0 Windowed Interfaces 1981-2009
https://www.garlic.com/~lynn/2009h.html#12 IBM Mainframe: 50 Years of Big Iron Innovation
https://www.garlic.com/~lynn/2009i.html#36 SEs & History Lessons
https://www.garlic.com/~lynn/2009k.html#1 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2009n.html#67 Status of Arpanet/Internet in 1976?
https://www.garlic.com/~lynn/2009o.html#79 Is it time to stop research in Computer Architecture ?
https://www.garlic.com/~lynn/2009o.html#82 OpenSolaris goes "tic'less"???
https://www.garlic.com/~lynn/2009p.html#76 The 50th Anniversary of the Legendary IBM 1401
https://www.garlic.com/~lynn/2009r.html#38 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2009r.html#49 "Portable" data centers
posts mentioning GML (markup language) invented at science center
in 1969 (precursor to sgml, html, xml, etc):
https://www.garlic.com/~lynn/submain.html#sgml
misc past posts mentioning 370 architecture "red book":
https://www.garlic.com/~lynn/96.html#23 Old IBM's
https://www.garlic.com/~lynn/96.html#24 old manuals
https://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
https://www.garlic.com/~lynn/2000.html#2 Computer of the century
https://www.garlic.com/~lynn/2000e.html#49 How did Oracle get started?
https://www.garlic.com/~lynn/2000f.html#35 Why IBM use 31 bit addressing not 32 bit?
https://www.garlic.com/~lynn/2001b.html#55 IBM 705 computer manual
https://www.garlic.com/~lynn/2001c.html#68 IBM Glossary
https://www.garlic.com/~lynn/2001m.html#39 serialization from the 370 architecture "red-book"
https://www.garlic.com/~lynn/2001n.html#43 IBM 1800
https://www.garlic.com/~lynn/2002b.html#48 ... the need for a Museum of Computer Software
https://www.garlic.com/~lynn/2002g.html#52 Spotting BAH Claims to Fame
https://www.garlic.com/~lynn/2002h.html#12 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#21 PowerPC Mainframe
https://www.garlic.com/~lynn/2002h.html#69 history of CMS
https://www.garlic.com/~lynn/2002l.html#67 The problem with installable operating systems
https://www.garlic.com/~lynn/2002l.html#69 The problem with installable operating systems
https://www.garlic.com/~lynn/2002m.html#2 Handling variable page sizes?
https://www.garlic.com/~lynn/2003b.html#59 Wanted: Weird Programming Language
https://www.garlic.com/~lynn/2003d.html#76 reviving Multics
https://www.garlic.com/~lynn/2003f.html#44 unix
https://www.garlic.com/~lynn/2003f.html#52 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003k.html#45 text character based diagrams in technical documentation
https://www.garlic.com/~lynn/2003k.html#52 dissassembled code
https://www.garlic.com/~lynn/2003l.html#11 how long does (or did) it take to boot a timesharing system?
https://www.garlic.com/~lynn/2003n.html#29 Architect Mainframe system - books/guidenance
https://www.garlic.com/~lynn/2004b.html#57 PLO instruction
https://www.garlic.com/~lynn/2004c.html#1 Oldest running code
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
https://www.garlic.com/~lynn/2004c.html#51 [OT] Lockheed puts F-16 manuals online
https://www.garlic.com/~lynn/2004d.html#43 [OT] Microsoft aggressive search plans revealed
https://www.garlic.com/~lynn/2004h.html#27 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004k.html#45 August 23, 1957
https://www.garlic.com/~lynn/2004p.html#50 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2004q.html#18 PR/SM Dynamic Time Slice calculation
https://www.garlic.com/~lynn/2005.html#5 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005b.html#25 360POO
https://www.garlic.com/~lynn/2005e.html#53 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005i.html#40 Friday question: How far back is PLO instruction supported?
https://www.garlic.com/~lynn/2005j.html#39 A second look at memory access alignment
https://www.garlic.com/~lynn/2005j.html#43 A second look at memory access alignment
https://www.garlic.com/~lynn/2005k.html#1 More on garbage
https://www.garlic.com/~lynn/2005k.html#58 Book on computer architecture for beginners
https://www.garlic.com/~lynn/2005m.html#46 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005n.html#48 Good System Architecture Sites?
https://www.garlic.com/~lynn/2005p.html#45 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2006c.html#45 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006d.html#24 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006e.html#15 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006h.html#55 History of first use of all-computerized typesetting?
https://www.garlic.com/~lynn/2006l.html#41 One or two CPUs - the pros & cons
https://www.garlic.com/~lynn/2006l.html#47 One or two CPUs - the pros & cons
https://www.garlic.com/~lynn/2006m.html#53 DCSS
https://www.garlic.com/~lynn/2006o.html#59 Why no double wide compare and swap on Sparc?
https://www.garlic.com/~lynn/2006p.html#55 PowerPC or PARISC?
https://www.garlic.com/~lynn/2006s.html#53 Is the teaching of non-reentrant HLASM coding practices ever defensible?
https://www.garlic.com/~lynn/2006x.html#25 Executing both branches in advance ?
https://www.garlic.com/~lynn/2007.html#30 V2X2 vs. Shark (SnapShot v. FlashCopy)
https://www.garlic.com/~lynn/2007b.html#31 IBMLink 2000 Finding ESO levels
https://www.garlic.com/~lynn/2007d.html#32 Running OS/390 on z9 BC
https://www.garlic.com/~lynn/2007f.html#7 IBM S/360 series operating systems history
https://www.garlic.com/~lynn/2007h.html#14 conformance
https://www.garlic.com/~lynn/2007i.html#31 Latest Principles of Operation
https://www.garlic.com/~lynn/2007m.html#26 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007q.html#8 GETMAIN/FREEMAIN and virtual storage backing up
https://www.garlic.com/~lynn/2007r.html#23 Abend S0C0
https://www.garlic.com/~lynn/2007r.html#56 CSA 'above the bar'
https://www.garlic.com/~lynn/2007u.html#30 folklore indeed
https://www.garlic.com/~lynn/2007v.html#0 IBM mainframe history, was Floating-point myths
https://www.garlic.com/~lynn/2007v.html#21 It keeps getting uglier
https://www.garlic.com/~lynn/2007v.html#46 folklore indeed
https://www.garlic.com/~lynn/2008.html#53 Really stupid question about z/OS HTTP server
https://www.garlic.com/~lynn/2008c.html#29 New Opcodes
https://www.garlic.com/~lynn/2008d.html#67 Throwaway cores
https://www.garlic.com/~lynn/2008l.html#47 Intel: an expensive many-core future is ahead of us
https://www.garlic.com/~lynn/2008m.html#57 "Engine" in Z/OS?
https://www.garlic.com/~lynn/2008m.html#90 z/OS Documentation - again
https://www.garlic.com/~lynn/2008s.html#77 Is SUN going to become x86'ed ??
https://www.garlic.com/~lynn/2009c.html#5 history of comments and source code annotations
https://www.garlic.com/~lynn/2009d.html#24 Can TOD (STCKE) be compressed into 12 bytes
https://www.garlic.com/~lynn/2009j.html#67 DCSS
https://www.garlic.com/~lynn/2009m.html#34 IBM Poughkeepsie?
https://www.garlic.com/~lynn/2009q.html#19 Mainframe running 1,500 Linux servers?
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: While watching Biography about Bill Gates on CNBC last Night
Newsgroups: alt.folklore.computers
Date: Mon, 21 Dec 2009 10:50:54 -0500
Peter Flass <Peter_Flass@Yahoo.com> writes:
VM supported FBA (effectively nearly all of vm&cms had been logically FBA from the start ... having to simulate on top of CKD ... 2311s & 2314s). The problem was MVS ... and services that had implementations dependent on multi-track search (like vtocs & pds directories). I tried to offer FBA support ... and was told it would cost $26m to ship (documentation and education) even if I provided them with fully tested & integrated production code. I was told I had to come up with business ROI to cover the $26m ... but customers were buying so much disk ... that they would just switch to buying the same amount of FBA that had been CKD (no incremental revenue). I was told that I couldn't use the lifecycle costs (the enormous amount of money that has gone into CKD simulation when all the underlying hardware is now FBA ... as well as the infrastructure overhead and performance degradation related to the simulation).
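the sense in which cms could be "logically FBA from the start" comes down to the usual geometry arithmetic: with fixed-size records, a CKD-style (cylinder, head, record) address and a flat block number are interchangeable. the geometry constants below are made-up example values, not real 2314/3330 parameters:

```python
# Collapsing CKD-style addresses to linear fixed-block numbers and back
# (CHS-to-LBA style); heads_per_cyl and recs_per_track are illustrative.
def to_block(cyl, head, rec, heads_per_cyl=19, recs_per_track=12):
    """Map (cylinder, head, record) to a single linear block number."""
    return (cyl * heads_per_cyl + head) * recs_per_track + rec

def from_block(block, heads_per_cyl=19, recs_per_track=12):
    """Recover (cylinder, head, record) from a linear block number."""
    track, rec = divmod(block, recs_per_track)
    cyl, head = divmod(track, heads_per_cyl)
    return cyl, head, rec
```

since the mapping is a pure round-trip, a filesystem that only ever deals in fixed-size records never needs the CKD search machinery at all.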
some disk storage history (this is not nearly as extensive since
disk division in san jose was unloaded to hitachi):
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_chrono20.html
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "Portable" data centers
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 21 Dec 2009 11:17:31 -0500
chrismason@BELGACOM.NET (Chris Mason) writes:
big change going from 155/165 to 158/168 was different real storage technology that was more compact/smaller and about 4-5 times faster. The 168 microcode engineers also point out that they reduced the avg. machine cycles per 370 instruction from 2.1 (370/165) to 1.6 (370/168).
the faster memory meant that when there was a cache miss ... the machine waited a shorter time for the data.
the low/mid range 370s had "vertical microcode" engines ... more like familiar computers ... and the implementation tended to avg. ten native instructions per 370 instruction (some similarity to current day software mainframe simulators running on intel platforms).
high end 370s had "horizontal microcode" engines ... where the "native" instruction could start lots of different operations in parallel ... as a result there was some amount of overlap in things that were going on ... and so instead of measuring native instructions per 370 instruction ... it was avg. machine cycles per 370 instruction.
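the 2.1-to-1.6 cycles-per-instruction improvement cited above works out, at an unchanged cycle time, to roughly a 31% higher 370 instruction rate; the same helper covers the vertical-microcode case, where the divisor is native instructions (~10) rather than machine cycles. the normalization to a unit engine rate is mine:

```python
# Instruction-rate arithmetic for the two microcode styles described above.
def instr_rate(engine_rate, per_370_instr):
    """370 instructions per unit time, given the engine's cycle (or native
    instruction) rate and the average cycles/natives per 370 instruction."""
    return engine_rate / per_370_instr

rate_165 = instr_rate(1.0, 2.1)   # 370/165: ~2.1 machine cycles per instr
rate_168 = instr_rate(1.0, 1.6)   # 370/168: ~1.6 machine cycles per instr
speedup = rate_168 / rate_165     # 2.1/1.6 = 1.3125, i.e. ~31% faster
```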
part of the issue was that there was an early joint project between the science center and endicott to modify cp67 to provide full 370 (virtual memory) virtual machines running on 360/67 (i.e. some of the new instructions and some format differences between 360/67 virtual memory tables and 370 virtual memory tables). This was the "cp67h" system (running on 360/67). Then there were modifications to cp67 to run on 370 virtual memory ("cp67i"). The cp67i system was regularly running in a 370 virtual machine a year before any real 370 virtual memory hardware became available. In fact, booting/ipling cp67i was originally used to verify the very first engineering machine with virtual memory hardware (370/145). for a long period it was cp67i that was running on all of the increasing numbers of internal 370/145s with virtual memory (and later "cp67sj" ... which was "cp67i" with support for 3330 & 2305 devices ... added by san jose).
recent x-over post from a.f.c about this early period with cp67i
https://www.garlic.com/~lynn/2009s.html#1 PDP-10s and Unix
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: While watching Biography about Bill Gates on CNBC last Night
Newsgroups: alt.folklore.computers
Date: Mon, 21 Dec 2009 12:08:17 -0500
jmfbahciv <jmfbahciv@aol> writes:
the customer was ordering the machine ... in part to punish IBM for having done something that horribly offended them ... and was going thru the order regardless (i used to go down and sit with the customer ... as opposed to sitting with the local branch office people ... and so got blow-by-blow details).
i interpreted the desire to put me on location ... as an attempt to obfuscate the real issues ... misdirection suggesting that there were possibly technical issues involved ... which was really a waste of my time.
but it possibly was not one of the best career enhancing moves ... along with ridiculing the FS effort ... or getting blamed for computer conferencing on the internal network ... supposedly one of the people involved in offending the customer was best buds with our CEO ... and if I didn't help with the misdirection and appear to take the blame when the customer didn't succumb to my technical persuasion ... it would be a black mark against me (supposedly I could expect rewards if I appeared to take the bullet).
recent posts drawing an analogy with boyd's "to be or to do"
https://www.garlic.com/~lynn/2009o.html#57 U.S. begins inquiry of IBM in mainframe market
https://www.garlic.com/~lynn/2009r.html#50 "Portable" data centers
that many disk drives ... could have enuf stuff going on in parallel to keep processors busy.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: While watching Biography about Bill Gates on CNBC last Night
Newsgroups: alt.folklore.computers
Date: Mon, 21 Dec 2009 17:14:38 -0500
Charles Richmond <frizzle@tx.rr.com> writes:
we had done some stuff on SCI before we left ... and afterwards periodically talked to Convex about their SCI (Exemplar) ... and actually got paid to do some consulting for Steve Chen who was then CTO up at Sequent. we also got called in to talk to the guy at HP responsible ... after HP picked up convex.
misc. recent posts:
https://www.garlic.com/~lynn/2009.html#5 Is SUN going to become x86'ed ??
https://www.garlic.com/~lynn/2009e.html#7 IBM in Talks to Buy Sun
https://www.garlic.com/~lynn/2009f.html#26 greenbar
https://www.garlic.com/~lynn/2009h.html#28 Computer virus strikes US Marshals, FBI affected
https://www.garlic.com/~lynn/2009i.html#22 My Vintage Dream PC
https://www.garlic.com/~lynn/2009o.html#10 Microprocessors with Definable MIcrocode
https://www.garlic.com/~lynn/2009o.html#29 Justice Department probing allegations of abuse by IBM in mainframe computer market
https://www.garlic.com/~lynn/2009p.html#55 MasPar compiler and simulator
https://www.garlic.com/~lynn/2009p.html#58 MasPar compiler and simulator
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "Portable" data centers
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 21 Dec 2009 17:31:33 -0500
rfochtman@YNC.NET (Rick Fochtman) writes:
retrofitting virtual memory to the 370/165 was extremely difficult ... in fact, some number of features in 370 virtual memory had to be dropped to make the 370/165 retrofit easier/possible/timely.
retrofitting virtual memory to 370/195 would have been significantly more difficult.
misc. recent posts mentioning 370/195
https://www.garlic.com/~lynn/2009m.html#34 IBM Poughkeepsie?
https://www.garlic.com/~lynn/2009m.html#75 Continous Systems Modelling Package
https://www.garlic.com/~lynn/2009o.html#11 Microprocessors with Definable MIcrocode
https://www.garlic.com/~lynn/2009p.html#82 What would be a truly relational operating system ?
https://www.garlic.com/~lynn/2009r.html#51 "Portable" data centers
https://www.garlic.com/~lynn/2009r.html#59 "Portable" data centers
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Union Pacific Railroad ditches its mainframe for SOA
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 22 Dec 2009 09:31:04 -0500
timothy.sipples@US.IBM.COM (Timothy Sipples) writes:
which acted as intermediary between webservers and the payment infrastructure.
two of the people in jan92 ha/cmp cluster scale-up meeting, mentioned
here
https://www.garlic.com/~lynn/95.html#13
later left and showed up at a small client/server startup responsible for something called "commerce server" ... which was a multi-store, virtual mall-like paradigm with a heavy oracle backend. we had also left and were asked to come in to consult because they wanted to do payment transactions on the server; the small client/server startup had also invented this technology called SSL that they wanted to use. The result is now frequently called "electronic commerce".
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Union Pacific Railroad ditches its mainframe for SOA
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 22 Dec 2009 12:01:20 -0500
Anne & Lynn Wheeler <lynn@garlic.com> writes:
the gateway started out as rs/6000 ha/cmp configuration with multiple (diverse) internet connections from different ISPs ... and some number of misc. boxes around the perimeter serving various integrity and security purposes.
i had originally started out planning on advertising alternate routes on the internet backbone ... but during the payment gateway work it was announced that the internet backbone was converting to hierarchical routing. As a result I had to fall back to multiple A-records ... for alternate paths.
sort of standard SLA for a high-volume merchant is a trouble desk with 5-minute first-level problem determination, ... a very early pilot had a trouble call that was closed as NTF after 3hrs.
I specified recovery and diagnostic criteria that had to go into the webserver talking to the payment gateway (something like done previously for mainframe stuff as well as ha/cmp) ... inventing a bunch of compensating procedures and writing a trouble shooting guide. I put together a matrix of 20-30 failure modes and 5-6 states and the webserver/payment gateway interaction had to demonstrate recovery &/or diagnosis for all possible conditions ... as part of my final signoff.
one of the issues was that I didn't have final signoff on the browser/client code. An early major commerce server was a sports product that did advertising on sunday football games. got them to put in multiple ISP connections ... but one of their ISPs had regularly scheduled maintenance all day sunday, on rotating cities across the country. it was nearly guaranteed that a whole class of users wouldn't be able to reach the website during half-time (anticipated high traffic) for at least one sunday. the browser people said that client multiple A-record support was too complicated (i.e. wasn't part of college example programs) ... even after I provided them with example client multiple A-record support from the tahoe 4.3 distribution. It took another year to get multiple A-record support into their client.
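a minimal sketch of the kind of client-side "multiple A-record" support described above: resolve all addresses for a host and try each in turn until one connects, rather than failing on the first unreachable address. the host/port and timeout here are placeholders, not details from the original incident:

```python
# Try every address a name resolves to; first reachable address wins.
import socket

def connect_any(host, port, timeout=5.0):
    """Return a connected socket to the first live address for host."""
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)
        try:
            s.connect(addr)
            return s                # connected: stop trying alternates
        except OSError as err:
            s.close()
            last_err = err          # remember failure, try next address
    raise last_err or OSError("no addresses for %s" % host)
```

the whole point is the loop: with only the first address tried (as in the browsers of the story), a website loses every client whose first-listed address sits behind an ISP that is down for maintenance.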
now tcp/ip is the technology basis for the modern internet ... but
nsfnet backbone can be considered the operational basis for the modern
internet and cix the business basis for the modern internet ... some
old email regarding nsfnet backbone activity
https://www.garlic.com/~lynn/lhwemail.html#nsfnet
and past posts mentioning nsfnet backbone
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Union Pacific Railroad ditches its mainframe for SOA
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 22 Dec 2009 12:12:01 -0500
re:
oh ... and for the fun of it ... some past posts mentioning doing some
work on a 450+k statement cobol program running on some 40+ fully
tricked out CECs ... where many of the payment transactions
(not just electronic commerce) actually get processed:
https://www.garlic.com/~lynn/2006u.html#50 Where can you get a Minor in Mainframe?
https://www.garlic.com/~lynn/2007l.html#20 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007n.html#67 least structured statement in a computer language. And the winner
https://www.garlic.com/~lynn/2007u.html#21 Distributed Computing
https://www.garlic.com/~lynn/2007v.html#64 folklore indeed
https://www.garlic.com/~lynn/2008c.html#24 Job ad for z/OS systems programmer trainee
https://www.garlic.com/~lynn/2008d.html#73 Price of CPU seconds
https://www.garlic.com/~lynn/2008l.html#81 Intel: an expensive many-core future is ahead of us
https://www.garlic.com/~lynn/2009d.html#5 Why do IBMers think disks are 'Direct Access'?
https://www.garlic.com/~lynn/2009d.html#14 Legacy clearing threat to OTC derivatives warns State Street
https://www.garlic.com/~lynn/2009e.html#76 Architectural Diversity
https://www.garlic.com/~lynn/2009f.html#55 Cobol hits 50 and keeps counting
https://www.garlic.com/~lynn/2009g.html#20 IBM forecasts 'new world order' for financial services
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: lynn@garlic.com (Lynn Wheeler)
Date: 22 Dec, 2009
Subject: Why don't people use certificate-base access authentication?
Blog: Payment Systems Network
re:
when one of the certificate oriented payment specifications was 1st released ... we did a public-key profile for the end-to-end process and got somebody that was working with a public key library (they had done speedups on the standard library by a factor of four) to do some benchmarks. when we reported the results ... we were told the numbers were too slow (instead of being told the numbers were four times too fast because of the speeded-up library). Six months later when some pilot projects were tested ... our earlier profile benchmark numbers were within a couple percent of measured (the speedups had by then been integrated into the widely used public key library).
... in addition to appended certificates representing a 100-times
payload bloat for standard payment transaction ... the
certificate-related public key ops were also resulting in 100-times
processing bloat.
https://www.garlic.com/~lynn/subpubkey.html#bloat
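a back-of-envelope version of the payload-bloat figure; the byte counts below are illustrative assumptions on my part, not numbers from the post:

```python
# Rough arithmetic behind a "100-times payload bloat" claim: appending
# certificate material of a few KB to a transaction of a few tens of bytes.
txn_bytes = 80            # assumed size of a bare payment transaction
cert_chain_bytes = 8000   # assumed size of the appended certificate material
bloat = (txn_bytes + cert_chain_bytes) / txn_bytes   # ~100x on the wire
```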
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: While watching Biography about Bill Gates on CNBC last Night
Newsgroups: alt.folklore.computers
Date: Tue, 22 Dec 2009 16:06:59 -0500
jmfbahciv <jmfbahciv@aol> writes:
recent reference to doing some optimization work on 450+k statement
cobol program that was loading down 40+ fully tricked out CECs (i.e.
40+ "mainframes" with as many processors and storage that could be
configured):
https://www.garlic.com/~lynn/2009s.html#9 Union Pacific Railroad ditches its mainframe for SOA
other posts in this thread:
https://www.garlic.com/~lynn/2009r.html#33 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2009r.html#37 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2009r.html#38 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2009r.html#39 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2009r.html#40 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2009r.html#41 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2009r.html#42 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2009r.html#54 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2009r.html#66 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2009r.html#67 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2009r.html#70 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2009r.html#71 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2009r.html#76 While watching Biography about Bill Gates on CNBC last Night
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
Date: Tue, 22 Dec 2009 00:34:16 -0500
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: user group meetings
MailingList: hillgang II
from long ago and far away ...
there were lots of vm installations in the bay area ... including SLAC
... hosting baybunch ... and tymshare ... which provided vmshare
http://vm.marist.edu/~vmshare/
following took a whole lot of sign-offs to bring in a tape from
tymshare every month ... there was worry that there might be some
contamination bringing stuff in from the outside:
Date: 03/07/80 16:14:30
From: wheeler
have gotten approval from everyone(?) to obtain monthly distribution
of VMSHARE data from Tymshare
... snip ... top of post, old email index.
I put it up on SJR ... and anybody on the internal network could access it via "DATASTAG" ... sort of an anonymous-ftp-like facility ... this was before TOOLSRUN.
I also put it up on the HONE system (vm370-based world-wide online sales&marketing support). The US HONE datacenters had been consolidated in the mid-70s in the bay area (in a bldg. located next to the current facebook bldg ... if somebody wants to do a lookup on online satellite photos, the facebook address can be found in most of the expected places ... the old HONE datacenter has a different occupant now) ... and it wasn't very far from SLAC.
One of my hobbies for a long time was doing internal "product" distribution of highly enhanced cp67 ... and then vm370 systems. HONE was one of my earliest customers starting in the cp67 days and continuing with transition to vm370 and into the 80s.
misc. past email mentioning vmshare
https://www.garlic.com/~lynn/lhwemail.html#vmshare
misc. past email mentioning hone
https://www.garlic.com/~lynn/lhwemail.html#hone
misc. past posts discussing hone (&/or apl)
https://www.garlic.com/~lynn/subtopic.html#hone
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DEC-10 SOS Editor Intra-Line Editing
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Wed, 23 Dec 2009 03:08:12 -0500
Mark Crispin <mrc@panda.com> writes:
also referenced here:
https://en.wikipedia.org/wiki/Leland_Stanford,_Jr.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DEC-10 SOS Editor Intra-Line Editing
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Wed, 23 Dec 2009 10:31:22 -0500
Anne & Lynn Wheeler <lynn@garlic.com> writes:
for some reason ... i'm fairly certain we sang it as ...
On the Leland Stanford Junior Varsity Farm.
instead of, as given in the above:
On the Leland Stanford Junior Farm.
... it went along with singing 99 bottles of beer on the wall. type of stuff in junior high riding sports/school bus to away games.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Real Meaning of SysRq Newsgroups: alt.folklore.computers Date: Wed, 23 Dec 2009 17:00:17 -0500kind of like the marine bumper sticker
"when it positively, absolutely, has to be destroyed overnight"
also
https://www.garlic.com/~lynn/2009q.html#69 Now is time for banks to replace core system according to Accenture
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Why Coder Pay Isn't Proportional To Productivity Newsgroups: alt.folklore.computers Date: Wed, 23 Dec 2009 17:26:57 -0500Why Coder Pay Isn't Proportional To Productivity
Why programmers are not paid in proportion to their productivity
http://www.johndcook.com/blog/2009/12/23/why-programmers-are-not-paid-in-proportion-to-their-productivity/
... i've periodically used KISS (or maybe it is the lines of code that you don't write).
this frequently came up when doing cp67 & vm370 ... more akin to microkernel ... periodically traditional operating system approaches would add more & more to the monitor/microkernel ... in part because at the start, it was so small and easy to modify ... but unbridled adding ... turns it into large, unwieldy and possibly "spaghetti" code. then it is in need of large cut&burn.
i had to do something like that for i/o subsystem for the disk
engineering lab ... so that they could do on-demand, concurrent testing
of multiple devices under development. misc. past posts mentioning
getting to play disk engineer (& doing operating system for them ... at
one time they had tried standard MVS and experienced 15min MTBF):
https://www.garlic.com/~lynn/subtopic.html#disk
recent post that as part of the rewrite ... significantly reduced the total
lines of code and the total pathlength ... and added more function ... with the
unintended side-effect of better alternate path operation
https://www.garlic.com/~lynn/2009q.html#79 Now is time for banks to replace core system according to Accenture
which was related to this post about making the redrive logic
significantly smaller & faster ... which resulted in degradation:
https://www.garlic.com/~lynn/2009r.html#52 360 programs on a z/10
also mentioned in earlier post in the previous thread:
https://www.garlic.com/~lynn/2009q.html#74 Now is time for banks to replace core system according to Accenture
post about something that resulted in 100* (hundred fold) increase in
bloat
https://www.garlic.com/~lynn/2009r.html#72 Why don't people use certificate-based access authentication?
https://www.garlic.com/~lynn/2009s.html#10 Why don't people use certificate-base access authentication
doing something that didn't result in such enormous bloat wasn't viewed as nearly as attractive ... since it wouldn't appear that nearly as much could be charged (i.e. getting paid less for KISS seems to permeate the whole value chain).
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
Date: Wed, 23 Dec 2009 18:21:56 -0500 From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: old email MailingList: hillgang IIOn 22 Dec 2009 00:34:16, Lynn wrote:
I had a bunch of stuff archived from late 60s & most of the 70s that had been replicated on three different tapes ... all in the almaden datacenter tape library. the datacenter had problem where random tapes were being mounted for scratch tapes ... and I lost several dozen tapes (including three replicated archive).
about the only thing that survived was the original cms multi-level update ... which had been nearly all done in exec.
melinda had contacted me asking for information about the original cms multi-level update ... shortly before the almaden tape problems ... fortunately I was able to pull off the original ... and send her a copy (before the tapes were destroyed).
old email from Melinda
https://www.garlic.com/~lynn/2006w.html#email850906
https://www.garlic.com/~lynn/2006w.html#email850908
in this post
https://www.garlic.com/~lynn/2006w.html#42
which also goes into some detail about the tapes lost in the period when random tapes were being mounted for scratch.
Original cp67 & cms source update started out with an exec that would check for a (single) update file ... apply it to the assembler file ... generating a temporary/work file ... and assemble the temporary/work file (as opposed to the original assembler file).
as undergraduate, i was making so many source changes ... i got tired of carefully/manually doing the sequence numbers in the updates ... and so came up with the "$" convention. I then did a quick&dirty simple assembler program that read the update file and did the "$" convention ... generating a temporary/work "update" file ... which was then passed to the update command (rather than the original update file) ... with appropriate modifications to the exec (that applied updates and did the assembly).
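The basic apply-the-update-file step can be sketched as follows; this is a minimal illustration only -- the control card syntax here ("./ D n m" to delete a range, "./ I n" to insert after a line) is an assumed stand-in, not the actual CMS UPDATE syntax, and the "$" auto-sequencing preprocessing isn't modeled.

```python
def apply_update(source, update):
    """source: list of (seqno, text) lines; update: list of card images.
    Returns the updated list of (seqno, text) lines (the work file)."""
    deletes = set()      # sequence numbers to drop
    inserts = {}         # seqno -> lines to insert after that line
    i = 0
    while i < len(update):
        card = update[i]
        if card.startswith("./ D"):      # delete a range of lines
            _, _, lo, hi = card.split()
            deletes.update(range(int(lo), int(hi) + 1))
            i += 1
        elif card.startswith("./ I"):    # insert text after a line
            _, _, at = card.split()
            body = []
            i += 1
            while i < len(update) and not update[i].startswith("./"):
                body.append(update[i])
                i += 1
            inserts.setdefault(int(at), []).extend(body)
        else:
            i += 1

    out = []
    for seq, text in source:
        if seq not in deletes:
            out.append((seq, text))
        # inserted lines get interpolated sequence numbers
        # (a real updater also has to resequence on collision)
        for k, new in enumerate(inserts.get(seq, []), 1):
            out.append((seq + k, new))
    return out

src = [(10, "GETSTG   CSECT"), (20, "         USING *,15"), (30, "         END")]
upd = ["./ D 20 20", "./ I 10", "         BALR  12,0"]
print(apply_update(src, upd))
```

the output of the sketch is the temporary/work file that would then be passed to the assembler (rather than the original source).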
the multilevel stuff started about the same time as the joint project with endicott to do 370 virtual machines in cp67 (running on 360/67).
base cp67 then became "L" updates ... mostly a whole bunch of updates that I had for the production cp67 system (cp67l system). then there were "H" updates ... which were the updates to add 370 virtual machine support to cp67 (cp67h system). Then there were the "I" updates ... modifications to cp67 to run on 370 architecture (rather than 360/67 architecture ... for cp67i system). A couple people from San Jose did the "sj" updates ... that added 3330 & 2305 device support to the "cp67i" system.
Because of security issues with unannounced virtual memory ... and many non-employee people with access to the cambridge system (students and others from educational institutions in the boston & cambridge area) ... "cp67h" ... normally ran in a 360/67 virtual machine (to avoid leaking to non-authorized employees even the existence of virtual 370 option). Then, cp67i would run in a (cp67h) 370 virtual machine. Then there would be cms running in a cp67i virtual machine. cp67i was running in regular use a year before the first engineering 370 with virtual memory support was operational.
Later, cp67i & cp67sj systems saw extensive use inside the corporation .... running for quite some time on (real) 370 (virtual memory) machines
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Larrabee delayed: anyone know what's happening? Newsgroups: comp.arch Date: Wed, 23 Dec 2009 19:57:19 -0500Terje Mathisen <"terje.mathisen at tmsw.no"> writes:
from the days of scarce, very expensive electronic storage ... disk channel programs especially used "self-modifying" operation ... i.e. a read operation would fetch the argument used by the following channel command (both specifying the same real address) ... a couple round trips of this end-to-end serialization potentially happening over 400' of channel cable within a small part of a disk rotation.
trying to get a HYPERchannel "remote device adapter" (simulated mainframe channel) working at extended distances with disk controller & drives ... took a lot of sleight of hand. a copy of the completed mainframe channel program was created and downloaded into the memory of the remote device adapter .... to minimize the command-to-command latency. the problem was that some of the disk command arguments had very tight latencies ... and so those arguments had to be recognized and also downloaded into the remote device adapter memory (and the related commands redone to fetch/store to the local adapter memory rather than the remote mainframe memory). this process was never extended to be able to handle the "self-modifying" sequences.
on the other hand ... there was a serial-copper disk project that effectively packetized SCSI commands ... sent them down outgoing link ... and allowed asynchronous return on the incoming link ... eliminating loads of the scsi latency. we tried to get this morphed into interoperating with fiber-channel standard ... but it morphed into SSA instead.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: PDP-10s and Unix Newsgroups: alt.folklore.computers Date: Wed, 23 Dec 2009 19:15:37 -0500Peter Flass <Peter_Flass@Yahoo.com> writes:
"PUNCH" was a valid assembler statement ... which basically generated a card image. "stage1" sysgen assembler macros were extensive "PUNCH" statements ... that might generate approaching a box (2000) of cards.
this was a single "job" (stage2) sysgen ... with 50-80 EXEC (job) steps ... run in sequence ... very few assembler steps ... mostly IEBCOPY & IEHMOVE (with the IEBCOPY & IEHMOVE steps possibly specifying hundreds of individual program members).
all of this was normally done with (stripped down) "starter" system ... a barebones os360 that conceivably ipled(/booted) on any machine. frequently, a sysgen required a shift or two of dedicated machine time.
i decided that I could improve on the process (in part because they would pre-empt my weekend use ... when I wanted the machine to do other stuff) ... for instance being able to run stage2 on the production system with HASP ... could significantly speed things up. also if I could carefully re-org the statements in stage2 ... I could change the order of files and library member locations on disk ... attempting to achieve optimal arm seek operation (the issue was things started at zero and proceeded from there; to have disk arm locality involved changing the order). so some of this involved reordering EXEC steps, some involved reordering control statements in IEBCOPY/IEHMOVE steps ... some involved moving existing (copy/move) control statements to new/different steps.
for student job workload (before watfor), the careful ordering increased thruput by three times (in large part optimized disk arm motion) compared to the thruput of a default sysgen.
before i got cms ... i did all this with physical cards ... with cms ... i put it into cms and used cms editor (and other things) to re-arrange the statements.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Larrabee delayed: anyone know what's happening? Newsgroups: comp.arch Date: Wed, 23 Dec 2009 21:07:19 -0500Robert Myers <rbmyersusa@gmail.com> writes:
concurrent with fiber channel work was SCI ... sci was going after asynchronous packetized SCSI commands ... akin to fiber channel and serial-copper ... but also went after asynchronous packetized memory bus.
the SCI asynchronous packetized memory bus was used by convex for exemplar, sequent for numa-q ... DG near its end did something akin to numa-q ... SGI also did flavor.
part of the current issue is that oldtime real storage & paging latency to disk (in terms of count of processor cycles) ... is comparable to current cache sizes and cache miss latency to main memory.
i had started in mid-70s saying that major system bottleneck was shifting from disk/file i/o to memory. in the early 90s ... the executives in the disk division took exception with some of my statements that relative system disk thruput had declined by an order of magnitude over a period of 15 years (cpu & storage resources increased by factor of 50, disk thruput increased by factor of 3-5) ... they assigned the division performance group to refute my statements ... after a couple weeks they came back and effectively said that I had understated the situation.
part of this was from some work i had done as undergraduate in the 60s on dynamic adaptive resource management ... and "scheduling to the bottleneck" (it was frequently referred to as "fair share" scheduling ... since the default policy was "fair share") ... dynamically attempting to adjust resource management to system thruput bottleneck ... required being able to dynamically attempting to recognize where the bottlenecks were.
misc. past posts mentioning dynamic adaptive resource management (and
"fair share" scheduling)
https://www.garlic.com/~lynn/subtopic.html#fairshare
when i was doing hsdt ... some of the links were satellite ... and I had to redo how the satellite communication operated. a couple years later there was a presentation at an IETF meeting that mentioned the cross-country fiber gigabit bandwidth*latency product ... it turned out the product was about the same as the product I had dealt with for high-speed (geo-sync) satellite (latency was much larger while the bandwidth was somewhat smaller ... but the resulting product was similar).
there are still not a whole lot of applications that actually do coast-to-coast full(-duplex) gigabit operation (full concurrent gigabit in both directions).
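The bandwidth*latency ("data in flight") comparison above can be sketched with a back-of-envelope calculation; the specific bandwidth and latency figures here are illustrative assumptions, not measurements from HSDT.

```python
# Rough bandwidth*latency ("window") comparison between a cross-country
# gigabit fiber link and a high-speed geo-sync satellite link.
# All numbers below are illustrative assumptions.

def window_bytes(bandwidth_bps, one_way_latency_s):
    """Data in flight needed to keep the pipe full (one direction)."""
    return bandwidth_bps / 8 * one_way_latency_s

# coast-to-coast fiber: ~1 Gbit/s, ~25 ms one-way propagation (assumed)
fiber = window_bytes(1_000_000_000, 0.025)

# geo-sync satellite: lower bandwidth, ~250 ms one-way up+down (assumed)
satellite = window_bytes(100_000_000, 0.250)

print(round(fiber / 1e6, 2), "MB in flight (fiber)")
print(round(satellite / 1e6, 2), "MB in flight (satellite)")
```

with these (assumed) numbers the two products come out identical (~3 MB in flight) -- the larger satellite latency is offset by the smaller bandwidth, which is the point being made above.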
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: PDP-10s and Unix Newsgroups: alt.folklore.computers Date: Thu, 24 Dec 2009 10:52:24 -0500Peter Flass <Peter_Flass@Yahoo.com> writes:
fortran-g 3-step compile, link-edit, & go ... was almost nothing about the student program ... it was almost all job scheduler related stuff for the 3 steps ... and bunch of other stuff ... like whole string of transient routines for (file) open/close system services.
watfor was single step monitor ... it loaded ... read program source, generated code in memory, and executed it, then read next. student jobs were typically around 20-40 cards. operators would accumulate half-tray to tray of cards (maybe 1000-2500, 50-200 jobs) and feed them into HASP. The job scheduler elapsed time to get the single-step watfor loaded & running ... could still be longer than the elapsed time for watfor to run thru 100 student jobs (even after I got job scheduler, open/close transient routines, lots of other stuff ... running three times faster).
big overhead was arm always having to move back to disk VTOC (master file directory) on cylinder zero ... for nearly any kind of operation. My default strategy in stage2 sysgen, was to place copy/move statements in order of highest use ... since copy/move started at lowest available disk address and moved out (highest use would be closest to vtoc ... which was the most frequent place for disk arm).
os/360 release 15/16 introduced being able to specify cylinder for the VTOC ... aka like in the middle of the drive. then things got more complex ... attempting to force allocation and copy/move statements to start on both sides of VTOC (located in the middle of the drive) ... and move outwards.
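A toy model shows why moving the VTOC to mid-pack (and allocating outward from both sides) pays off; this assumes data accesses uniformly distributed over the pack, which is an illustrative simplification, not a measurement.

```python
# Toy model of average disk-arm travel between the VTOC and a data
# cylinder, comparing VTOC at cylinder 0 versus mid-pack.

def avg_travel(vtoc_cyl, n_cyls):
    """Mean |data_cyl - vtoc_cyl| over all cylinders (uniform access)."""
    return sum(abs(c - vtoc_cyl) for c in range(n_cyls)) / n_cyls

N = 404                          # cylinders on a 3330-style pack
edge = avg_travel(0, N)          # VTOC at cylinder 0
middle = avg_travel(N // 2, N)   # VTOC mid-pack

print(round(edge, 1), round(middle, 1))   # mid-pack roughly halves travel
```

under this uniform-access assumption the mid-pack VTOC cuts average arm travel roughly in half; the real gain depended on also forcing high-use datasets adjacent to the VTOC, as described above.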
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Larrabee delayed: anyone know what's happening? Newsgroups: comp.arch Date: Thu, 24 Dec 2009 11:36:33 -0500Terje Mathisen <"terje.mathisen at tmsw.no"> writes:
but it was all in main processor real storage ... so search operations that compared on something would be constantly fetching the search argument from main memory ... lots of latency and heavy load on the path. normally a channel was supposed to have lots of concurrent activity ... but during a search operation ... the whole infrastructure was dedicated to that operation ... & locked out all other operations.
Issue was that design point was from early 60s when I/O resources were really abundant and real-storage was very scarce. In the late 70s, I would periodically get called into customer situations (when everybody else had come up dry).
late 70s, large national retailer ... several processors in loosely-coupled, shared-disk environment ... say half-dozen regional operations with processor complex per region ... but all sharing the same disk with application program library.
program library was organized in something called PDS ... and the PDS directory (of programs) was "scanned" with multi-track search for every program load. this particular environment had a three "cylinder" PDS directory ... so avg. depth of search was 1.5 cylinders. This was 3330 drives that spun at 60 revs/sec and had 19 tracks per cylinder. The elapsed time for a multi-track search of a whole cylinder ran 19/60s of a second ... during which time the device, (shared) device controller, and (shared) channel were unavailable for any other operations. The drive with the application library for the whole complex was peaking out at about six disk I/Os per second (2/3rds multi-track search of the library PDS directory and one disk I/O to load the actual program, peak maybe two program loads/sec).
before I knew all this ... I'm brought into a class room with six foot long student tables ... several of them covered with foot high piles of paper print outs of performance data from the half-dozen different systems. Basically print out for a specific system with stats showing activity for 10-15 minute periods (processor utilization, and i/o counts for individual disks, other stuff) ... for several days ... starting in the morning and continuing during the day.
Nothing stands out from their description ... just that thruput degrades enormously under peak load ... when the complex is attempting to do dozens of program loads/second across the whole operation.
I effectively have to integrate the data from the different processor complex performance printouts in my head ... and then do the correlation that specific drive (out of dozens) is peaking at (aggregate) of 6-7 disk i/os per second (across all the processors) ... during periods of poor performance (takes 30-40 mins). I then get out of them that drive is the application program library for the whole complex with a three cylinder PDS directory. I then explain how PDS directory works with multi-track search ... and the whole complex is limited to two program loads/sec.
The design trade-off was based on environment from the early 60s ... and was obsolete by the mid-70s ... when real-storage was starting to get abundant enough that the library directory could be cached in real storage ... and didn't have to do rescan on disk for every program load.
lots of past posts mentioning CKD DASD (disk) should have moved
away from multi-track search several decades ago
https://www.garlic.com/~lynn/submain.html#dasd
other posts about getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk
the most famous was ISAM channel programs ... that could go thru things like multi-level indexes ... with "self-modifying" operation ... where an operation would read into real storage ... the seek/search argument(s) for following channel commands (in the same channel program).
ISAM resulted in heartburn for the real->virtual transition. Channel programs all involved "real" addresses. For virtual machine operation ... it required a complete scan of the "virtual" channel program, making a "shadow" ... that had real addresses (in place of the virtual addresses) ... and executing the "shadow" program. Also seek/search arguments might need to be translated in the shadow (since the channel program actually being executed no longer referred to the addresses where the self-modifying arguments were being stored).
The old time batch, operating system ... with limited real-storage ... also had convention that the channel programs were built in the application space ... and passed to the kernel for execution. In their transition from real to virtual storage environment ... they found themselves faced with the same translation requirement faced by the virtual machine operating systems. In fact, they started out by borrowing the channel program translation routine from the virtual machine operating system.
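The scan-and-shadow translation can be sketched as follows; the CCW representation here is a simplified stand-in (opcode, address, count tuples), not the real 360 CCW bit layout, and a real translator also has to pin pages, split transfers that cross page boundaries, and handle data chaining.

```python
# Simplified sketch of "shadow" channel-program translation: scan the
# virtual channel program, replace each virtual data address with its
# real address, and execute the shadow copy instead.

def translate(virtual_ccws, v2r):
    """virtual_ccws: list of (opcode, virt_addr, count) tuples.
    v2r: function mapping a virtual address to a real address."""
    shadow = []
    for op, vaddr, count in virtual_ccws:
        shadow.append((op, v2r(vaddr), count))
    return shadow

# identity-plus-offset translation, purely for illustration
def v2r(vaddr, base=0x80000):
    return base + vaddr

prog = [("SEEK", 0x1000, 6), ("SEARCH", 0x1006, 5), ("READ", 0x2000, 80)]
shadow = translate(prog, v2r)
print(shadow[0])   # ('SEEK', 528384, 6)  i.e. address 0x81000
```

the self-modifying ISAM case is what breaks this simple picture: the READ may store a seek/search argument at an address the following commands fetch from, and the shadow no longer shares storage with the virtual program at that address.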
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Larrabee delayed: anyone know what's happening? Newsgroups: comp.arch Date: Thu, 24 Dec 2009 14:00:48 -0500re:
so one of the first SANs was at NCAR with a pool of IBM CKD dasd, an IBM 43xx (midrange) mainframe, some number of "supercomputers", and HYPERchannel.
all the processors could message each other over HYPERchannel and also access the disks. The IBM mainframe acted as SAN controller ... getting requests (over hyperchannel) for data ... potentially have to first stage it from tape to disk ... using real channel connectivity to ibm disks.
ibm disk controllers had multiple channel connectivity ... at least one to the "real" ibm channel and one to the HYPERchannel remote device adapter (emulated channel). The A515 was an upgraded remote device adapter that had the capability of downloading both the full channel program and the dasd seek/search arguments into local memory (it could distinguish between address references for the seek/search arguments in local memory vis-a-vis the read/write transfers that involved "host" memory addresses).
the ibm mainframe would load the channel program (to satisfy the data request, from some supercomputer) into the memory of the A515 ... and then respond to the requesting supercomputer with the "handle" of the channel program in one of the A515s. The supercomputer would then make a request to that A515 for the execution of that channel program ... transferring the data directly to the supercomputer ... w/o having to go thru the ibm mainframe memory ... basically "control" went thru ibm mainframe ... but actual data transfer was direct.
later, there was standardization work on HIPPI (and FCS) switches to allow definition of something that would simulate the NCAR HYPERchannel environment and the ability to do "3rd party transfers" ... directly between processors and disks ... w/o having to involve the control machine (setting it all up) in the actual data flow.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Why Coder Pay Isn't Proportional To Productivity Newsgroups: alt.folklore.computers Date: Thu, 24 Dec 2009 17:57:30 -0500"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
slight x-over from this thread
https://www.garlic.com/~lynn/2009r.html#56 You know you've been Lisp hacking to long when
https://www.garlic.com/~lynn/2009r.html#65 You know you've been Lisp hacking to long when
and whether or not actually proficient in a language ... somewhat akin to writing a really, really bad poem in a language while totally lacking any proficiency in that language ... sometimes it seems that the majority of the programmers in the world are severely lacking in proficiency in whatever language they are writing programs in (possibly assuming that being proficient in, say, english is sufficient to qualify them as proficient in any other language).
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: PDP-10s and Unix Newsgroups: alt.folklore.computers Date: Thu, 24 Dec 2009 18:09:33 -0500jmfbahciv <jmfbahciv@aol> writes:
I had os/360 i could assemble it with and I had a BPS loader that I could put on the front to load the program.
before that I had looked in on a 360 assembler class (I wasn't taking it) that was taught before the univ. got a 360 ... it was using a 360 assembler that ran on 709.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: PDP-10s and Unix Newsgroups: alt.folklore.computers Date: Fri, 25 Dec 2009 10:14:47 -0500jmfbahciv <jmfbahciv@aol> writes:
os/360 came with "starter system" disk ... image actually came on
tape that could be booted, restoring the image to disk. the "starter
system" disk is booted and used to generate a tailored system.
https://www.garlic.com/~lynn/2009s.html#19 PDP-10s and Unix
aka os/360s were delivered to customers with lots of support infrastructure ... even the really small 360s that ran with BPS (basic programming support ... aka card based, machines that only had unit record and no attached tapes or disks).
tapes (that could be booted) were the distribution medium for lots of material ... even some "BPS" (basically card system) stuff ... the "BPS" stuff ... say on a distribution tape ... had utilities that could punch files from the tape. Otherwise distribution had to be in actual cards.
"BPS" loader was 80-100(?) cards "self-loading" card deck ... which would load a (following) standard output deck from assembler/compiler.
There was also "BPS" self-loading application that would take executable input card deck (following) and punch a "self-loading" executable version. This was used (among other things) to generate a copy of the "BPS" loader (after assembling the source).
360s came with lots of microprogramming so it was rare to have a requirement to toggle in a boot sequence from the front panel. When I was debugging my monitor ... i used the front panel to stop execution, single-instruction-step execution, and alter memory (instructions and data) ... but never found I had to toggle in a boot sequence.
The 360 boot (IPL) process used three rotary switches to select the device address and an IPL button to start the process. The IPL process was defined to read 24 bytes from the selected address into location zero (and all devices were defined to respond to a default read operation). The 2nd & 3rd 8 bytes were assumed to be channel program commands ... which were then executed in a (microcode boot) I/O operation. After that I/O operation completed, the 8 bytes at location zero were loaded as the PSW (program status word) and execution started.
The standard card reader i/o read operations and standard tape i/o read operations were the same ... so 80 record card images could be placed on tape ... and the IPL/boot sequence was identical whether it was real cards from card reader ... or card images on a tape (except the IPL address in the rotary switch).
There was "infamous" 3-card loader (first 24 bytes for boot process and two 80-byte cards ... 160 bytes of instructions) ... the "images" could be put into assembler "PUNCH" statements (referenced in sysgen process above). Including this in assembler program effectively turned nearly any resulting output of the assembler into "self-loading" program. The big difference between the 3-card loader and the (larger) BPS loader ... was the BPS loader would handle multiple different assembled/compiled programs and handle symbol/address resolution between the different outputs.
Later I got some additional familiarity with boot sequence when working on CP67 ... because CP67 had to simulate the IPL/boot process for the virtual machine.
do search engine for 3card loader ... turns up one of my posts:
http://www.mail-archive.com/ibm-main@bama.ua.edu/msg43867.html
also here
https://www.garlic.com/~lynn/2007f.html#1 IBM S/360 series operating systems history
the above also mentions some enhancements that I made in the early booting sequence of cp67.
another reference to 3card loader ... related to the hercules 390 emulator:
http://www.cbttape.org/~jjaeger/cdrom.html
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: DEC-10 SOS Editor Intra-Line Editing Newsgroups: alt.folklore.computers,alt.sys.pdp10 Date: Fri, 25 Dec 2009 10:44:19 -0500jmfbahciv <jmfbahciv@aol> writes:
other
http://www.mcjones.org/dustydecks/archives/2009/10/14/106/
when i first transferred to SJR ... backus office was about 6-8 doors down from mine.
ibm 704 fortran manual
http://www.fortran.com/ibm.html
fortran wiki
https://en.wikipedia.org/wiki/Fortran
os/360 fortran program jcl might be something like
step exec pgm=*
ft06f001 dd sysout=a
ft05f001 dd *
CMS imported lots of os/360 applications, compilers, etc and had a
os/360 emulation layer. fortran os/360 (i/o) execution libraries would
have OPEN for ft05f001 & ft06f001 ... cms uses FILEDEF command to
simulate the os/360 "DD" statement
http://wwwasdoc.web.cern.ch/wwwasdoc/zebra_html3/node86.html
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Why Coder Pay Isn't Proportional To Productivity Newsgroups: alt.folklore.computers Date: Fri, 25 Dec 2009 10:46:59 -0500jmfbahciv <jmfbahciv@aol> writes:
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: channels, was Larrabee delayed: anyone know what's happening? Newsgroups: comp.arch Date: Fri, 25 Dec 2009 11:26:23 -0500John Levine <johnl@iecc.com> writes:
started to change for 303x. 370 had been assumed to die during the
"future system" effort era ... and so the product pipelines were
allowed to dry up. after FS was killed ... misc. past FS posts
https://www.garlic.com/~lynn/submain.html#futuresys
there was a mad rush to get stuff back into the 370 product pipelines. Part of that was the 303x series. To make the 303x "channel director" (external channels), they took the 370/158 engine integrated channel microcode ... and made it a separate box (w/o 370 microcode). A 3031 then became two 370/158 boxes/engines ... one with only the integrated channel microcode ... and one with only the 370 microcode. A 3032 was a 370/168 reconfigured to work with the 303x channel director (370/158 integrated channels as separate box). A 3033 started out being the 168 wiring diagram remapped to 20% faster chips. The chips also had something like 10 times the circuits per chip ... originally going unused ... however, late in the product development cycle ... there was a targeted redesign to better utilize the on-chip circuits ... and the 3033 came in approx. 50% faster than the 168.
big change going from 370 to 370-xa was having queued i/o. the issue
was the enormous pathlength in MVS to take an interrupt and "redrive" a
queued request (moving it outboard eliminated that synchronous latency
overhead in MVS). Recent post in mainframe mailing list about getting
into trouble in this area ... having generated a super-fast device i/o
redrive ... directly in 370 assembler
https://www.garlic.com/~lynn/2009r.html#52 360 program on a z/10
part of this was rewriting the i/o supervisor for the disk engineering & product test labs ... to never fail. the labs had attempted to use MVS ... but found it had a 15min MTBF with just a single testcell/device ... and had dropped back to "stand-alone" operation. A never-fail operating system meant that testing for any number of testcells could go on concurrently. That significantly increased productivity ... from around the clock, scheduled, one-at-a-time testing ... to anytime, on-demand testing.
one of the tricks done in CKD disk i/o was an "atomic" compare&swap sequence for shared-disk, loosely-coupled (aka cluster) operation. The earliest/largest such deployment was at the internal consolidated HONE datacenter in the bay area in the late 70s (if you use an online sat. map and search for the facebook address ... it was in the bldg next to (new) facebook; it has a different occupant now, and at the time, the facebook bldg didn't exist).
previous convention had been to use the disk reserve/release i/o commands ... which is much more cumbersome than the compare&swap convention.
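The compare&swap semantics can be modeled in a few lines; this is a single-process toy illustration of the idea (update succeeds only if the value still matches what the caller last saw), not the actual CKD channel-program sequence.

```python
# Toy model of compare&swap semantics -- the same idea the CKD disk i/o
# sequence provided for shared-disk lock records: an update succeeds
# only if the current value still matches the expected (last seen) one,
# so a losing racer simply retries instead of holding the whole device.

def compare_and_swap(cell, expected, new):
    """cell: one-element list standing in for a shared lock record."""
    if cell[0] == expected:
        cell[0] = new
        return True      # our update won / we hold the lock
    return False         # somebody else changed it first; retry

lock = ["free"]
assert compare_and_swap(lock, "free", "held-by-A")      # A acquires
assert not compare_and_swap(lock, "free", "held-by-B")  # B loses the race
assert compare_and_swap(lock, "held-by-A", "free")      # A releases
```

contrast with reserve/release, which ties up the whole device path for the duration instead of just racing on the single lock record.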
for other topic drift ... past posts mentioning charlie inventing
compare&swap instruction ... when doing fine-grain multiprocessor
locking work on cp67 (aka CAS was thought up, because they are charlie'
initials):
https://www.garlic.com/~lynn/subtopic.html#smp
360/65 (& 360/67 ... basically 360/65 with virtual memory hardware add-on) had external channels. However, there was still some amount of processor interference with things like memory bus contention. when I was an undergraduate ... I got to do a lot of rewrite of the (virtual machine) cp67 system. One of the things was that standard cp67 came with 1052 & 2741 terminal support ... so I had to add tty/ascii support for the univ. terminals. As part of doing that ... I tried to make the standard mainframe terminal controller do something that it couldn't quite do. Somewhat as a result ... the univ. started a clone controller project (that would do the additional functions); reverse engineered the 360 channel interface, built a 360 channel interface board for an interdata/3 and programmed the interdata/3 to emulate a mainframe terminal controller.
In early testing ... there was a situation that stopped the machine.
The memory bus was used by the processor for instructions & data ... the
"timer" was also in real storage ... which required updating on every
timer tic (on the 65/67, approx. every 13 microseconds) ... as well as
by the channels for i/o transfers. The machine would halt and signal an
error if the timer tic'ed while there was still a memory update pending
from a prior tic (which was happening because the controller initially
held the memory bus for more than 13 microseconds at a time). later
there was a writeup blaming four of us for the clone controller business
https://www.garlic.com/~lynn/submain.html#360pcm
... and there have been writeups that a major motivation for the Future System effort was the clone controller business.
later writeups attribute clone processors being able to gain a market foothold to the Future System era resulting in the 370 product pipelines going dry (370 efforts were killed off during the FS era because it was assumed FS would replace everything).
misc. past posts in thread:
https://www.garlic.com/~lynn/2009s.html#18 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#20 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#22 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#23 Larrabee delayed: anyone know what's happening?
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Larrabee delayed: anyone know what's happening? Newsgroups: comp.arch Date: Fri, 25 Dec 2009 12:18:33 -0500
Bill Todd <billtodd@metrocast.net> writes:
minor reference to doing compare&swap i/o sequence in CKD dasd as
alternative (late 70s)
https://www.garlic.com/~lynn/2009s.html#29 channels, was Larrabee delayed: anyone know what's happening?
early 70s ... the 3830/3330 disk controller had the "ACP" (airline
control program) RPQ ... which supported "logical" locking out in the
controller (somewhere I think there was reference to 2314 having
something similar earlier) ... misc. past posts:
https://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP
https://www.garlic.com/~lynn/2004.html#49 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004.html#50 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004o.html#29 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2008i.html#38 American Airlines
https://www.garlic.com/~lynn/2008i.html#39 American Airlines
https://www.garlic.com/~lynn/2008j.html#50 Another difference between platforms
old email on the subject (from Jim Gray to distribution):
https://www.garlic.com/~lynn/2008i.html#email800325
above mentions system/r (original relational/sql implementation)
... misc past posts
https://www.garlic.com/~lynn/submain.html#systemr
when we were doing ha/cmp & ha/cmp scale-up ... misc past posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
we worked with DBMS vendors that had done implementations for vax/cluster ... and supported a vax/cluster lock-semantics compatibility interface. however, some of the vendors had lists of things that (vax/cluster) had done inefficiently (&/or that had grown up over a decade or so) ... which we had to avoid/fix (some of which related to implementation details based on feature/function provided by the HSC)
as part of doing HA/CMP ... we were also called in for design walk-thrus by various RAID vendors ... looking for single-points-of-failure (it was interesting the number of vendors that would do the raid semantics for disk ... but completely overlook various single-points-of-failure in other places in the implementation).
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Larrabee delayed: anyone know what's happening? Newsgroups: comp.arch Date: Fri, 25 Dec 2009 12:45:32 -0500
hmy1 <hmy1@aol.com> writes:
part of this was that escon technology had been laying around pok, unannounced, for possibly a decade (as eventually used for mainframe channels, the paradigm was half-duplex with end-to-end serialization). one of the rs/6000 engineers took it and did some tweaking (which made it incompatible, full-duplex, etc). then he wanted to do an 800mbit version ... and we had been involved in the FCS activity ... and managed to talk him into getting involved with FCS instead. Then the pok channel engineers got involved in the standardization activity (not so much on the basic standard ... but trying to overlay stuff on top of the underlying standard).
alternative example was the serial copper, full-duplex asynchronous stuff that Hursley did (Harrier/9333) ... with effectively packetized scsi commands. we spent some effort trying to get harrier morphed so that it interoperated with FCS ... but it turned into "SSA" instead.
referenced here ... in old post related to ha/cmp & ha/cmp scale-up
https://www.garlic.com/~lynn/95.html#13
recent posts in thread:
https://www.garlic.com/~lynn/2009s.html#18 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#20 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#22 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#23 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#29 channels, was Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#30 Larrabee delayed: anyone know what's happening?
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Larrabee delayed: anyone know what's happening? Newsgroups: comp.arch Date: Fri, 25 Dec 2009 16:38:25 -0500
"Del Cecchi" <delcecchi@gmail.com> writes:
for instance ... there was this 16-way 370 SMP thing ... which everybody thought was just great. we had even convinced some of the 3033 processor development engineers to work on it in their spare time ... which is where I got some of the blow-by-blow about the 303x stuff. somewhere along the way ... somebody informed the head of POK that it might be decades before the POK favorite-son operating system had 16-way support ... at which point some number of people were invited to not visit pok again (and the 3033 engineers were directed to get their noses back to the grindstone).
i had little to do with rochester ... did do a lot with pok and san jose
... and was in austin for a time and did ha/cmp & some ha/cmp cluster
scale-up (before it got transferred and we were told we couldn't work on
anything with more than four processors). some related old email
https://www.garlic.com/~lynn/lhwemail.html#medusa
the referenced rs6000 engineer then shows up as "secretary" for FCS committee and managing the FCS standards documents.
there were a number of people that left IBM austin and joined dell ... and some of the austin-area computer meetings were hosted at dell. one of the rochester ibm fellows that did work on s/38 also showed up at DELL.
I believe I never even visited the rochester location.
from the days when my wife was in POK and in charge of (mainframe)
loosely-coupled (cluster) architecture ... and responsible
for Peer-Coupled Shared Data architecture
https://www.garlic.com/~lynn/submain.html#shareddata
she was also an inventor on the IBM "ring" patent that I believe initially shows up as the S/1 "chat ring". In any case, Peer-Coupled Shared Data saw very little uptake (except for IMS hot-standby) until much later with sysplex ... a big reason that she didn't stay long in the position.
One of the guys in rochester that I had some dealings with worked on
RCHVM (virtual machine) systems and VNET support ... I also had some
dealings with him when I was doing the HSDT (high-speed data transport) activity
https://www.garlic.com/~lynn/subnetwork.html#hsdt
Most of the dealings that I had with rochester were that they made the chips for the rs/6000 serial link adapter (SLA ... aka the full-duplex tweaked escon ... 220mbits instead of 200mbits, etc). Since it was "proprietary" and didn't connect to anything else ... we tried to figure out how to use it in an "open system" environment. We talked NSC (HYPERchannel & later other stuff) into providing an SLA interface in their high-speed router ... however we had to make the chips available to them first. Turns out that we couldn't just send chips from Rochester to Minneapolis ... they had to be transferred from rochester to austin (at a 300% mark-up) and then austin had to transfer them from austin to minneapolis (at another 300% markup ... now 900% markup). This was for something that they thought we should be paying them for doing, as a favor to us.
i was involved in some early 801 iliad & romp stuff
https://www.garlic.com/~lynn/lhwemail.html#801
the precursor to rs/6000 was the pc/rt. the pc/rt originally started out
as ROMP processor with pl8 and cpr for follow-on to displaywriter (aka
Austin was office product division ... OPD). When that was canceled,
they looked around and settled on selling it into the unix workstation
market instead. They got the company that had done the AT&T unix port to
the ibm/pc (for PC/IX) to do one for ROMP (aka pc/rt). misc. other posts
mentioning 801, romp, rios, iliad, etc
https://www.garlic.com/~lynn/subtopic.html#801
Date: 01/22/80 14:21:40
To: wheeler (as well as others)
From: <corporate NETMAP>
Greetings,
There are 2 new nodes that should be added to the network:
BCRCPS which is a 148 VM system located in Boca Raton connected to
BCR68A via a 4800 line.
RCHVM1 which is a 158AP VM system located in Rochester, Minn
connected to RCH648 via a 9.6 line
... snip ... top of post, old email index.
The internal network was larger than the arpanet/internet from just
about the beginning until sometime late '85 or early '86 ... some
past posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
I did an edit macro that would take "structured" network notifications
and automatically do the correct thing ... post with some of the
structured notifications in '83 (when arpanet moved to internetworking
protocol and started to exceed 255 nodes ... and the internal network
exceeded 1000 nodes)
https://www.garlic.com/~lynn/2006k.html#8
as well as list of corporate locations that had new/additional
nodes added during 1983.
Date: 02/21/80 06:23:27
To: "world"
From: <somebody in rochester>
Greetings !
Its been a L-O-N-G time, but I finally made it ! With this VMSG
I am announcing that I am again back on the NETWORK at a new node
in Rochester, Mn (GSD) named 'RCHVM1'. My userid is 'xxxxxx' (as
in Boulder). Please change all nickname files to reflect this
change.
Its good to be "home".
... snip ... top of post, old email index.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Larrabee delayed: anyone know what's happening? Newsgroups: comp.arch Date: Fri, 25 Dec 2009 17:08:12 -0500
... addenda ... in 1980 ... i wrote a bunch of mainframe software to support (NSC) HYPERchannel use at corporate internal datacenters ... and then tried to have a joint release (with NSC) of the software to customers (NSC eventually had to recreate the stuff from scratch).
it appeared that the major objection preventing me from releasing that software ... was from the channel people in POK trying to get their fiber technology out as ESCON (viewing hyperchannel as competition) ... some possibly later involved in all the gorp of overlaying FICON on top of FCS.
one of the disk engineers in san jose that I worked some with ... got a '78 patent on raid technology (predating the "RAID" term by nearly a decade).
However, I believe S/38 had the first ship of raid technology. S/38 treated all the available disks as one large pool ... simplifying a lot of things. a common failure mode was single disk failure ... which, in the common pool design, brought down the whole system. RAID was needed to mask such single-disk failures (from taking down the whole infrastructure). However, s/38 then required full infrastructure backup ... and full infrastructure restore ... to handle other kinds of failure/recovery. s/38 configurations were somewhat notorious for taking a long time to get back up and running ... since a complete restore could take a long time.
misc. past posts in this thread:
https://www.garlic.com/~lynn/2009s.html#18 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#20 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#22 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#23 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#29 channels, was Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#30 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#31 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#32 Larrabee delayed: anyone know what's happening?
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Larrabee delayed: anyone know what's happening? Newsgroups: comp.arch Date: Sat, 26 Dec 2009 15:58:23 -0500
Robert Myers <rbmyersusa@gmail.com> writes:
early in the introduction of the PC ... something that contributed significantly to early uptake was 3270 terminal emulation ... basically, a corporation that had already business-justified tens of thousands of 3270s could get a PC for about the same price and, in a single desktop footprint, do both 3270 mainframe operation and some local computing (an almost no-brainer business justification ... more function for the same price as something that was already justified).
moving later into the decade ... the communication group had a large terminal emulation install base that it was attempting to protect ... however the technology was moving on ... and the terminal emulation paradigm was becoming a major bottleneck between all the desktops and the datacenter. as a result ... data was leaking out of the datacenter at an alarming rate ... significantly driving the commodity desktop and server disk market.
the disk division had attempted to bring a number of products to market
that would have provided channel-speed-like thruput and a lot more
function between the desktops and the datacenter (attempting to maintain
a role for the datacenter in the modern distributed environment) ... but was
constantly blocked by the communication business unit (attempting to
preserve the terminal emulation install base). misc. past posts
mentioning terminal emulation
https://www.garlic.com/~lynn/subnetwork.html#emulation
this is somewhat related to earlier battles that my wife had with the
communication group when she was con'ed into going to POK (center of
high-end mainframe) to be in charge of loosely-coupled architecture.
She was constantly battling with the communication group over using
their terminal-oriented products for high-speed multiple-processor
operation. There would be a temporary truce where she was allowed to
use whatever she wanted within the walls of the datacenter ... but the
communication group's terminal-oriented products had to be used for
anything that crossed the datacenter walls. misc. past posts
mentioning my wife doing a stint in POK in charge of loosely-coupled
architecture
https://www.garlic.com/~lynn/submain.html#shareddata
... anyway ... and so it came to pass ... san jose disk division is long gone.
past posts in this thread:
https://www.garlic.com/~lynn/2009s.html#18 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#20 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#22 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#23 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#29 channels, was Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#30 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#31 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#32 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#33 Larrabee delayed: anyone know what's happening?
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Larrabee delayed: anyone know what's happening? Newsgroups: comp.arch Date: Sat, 26 Dec 2009 18:37:33 -0500
Robert Myers <rbmyersusa@gmail.com> writes:
businesses didn't mind so much that business critical data was traveling out to somebody's desktop for use in a spreadsheet (as long as it was all on premise and non-authorized people couldn't eavesdrop) ... it was when the data disappeared from the datacenter to reside on somebody's desktop ... which then experienced some desktop glitch ... and it turned out the data wasn't backed up ... and the business found itself w/o some major critical piece of business operational data (putting the business at risk).
in the mid-90s there was some study that half of the businesses that lost a disk with un-backed-up business critical data filed for bankruptcy within 30 days.
business critical datacenters tended to have little things like (at least) daily backups ... along with disaster recovery plans ... contingencies to keep the business running (that had become critically dependent on dataprocessing processes)
when we were doing ha/cmp ... some past posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
I coined the terms disaster survivability and geographic
survivability (to differentiate from simple disaster/recovery)
... some past posts
https://www.garlic.com/~lynn/submain.html#available
also in that period ... i was asked to write a section for the corporate continuous availability strategy document. unfortunately, both Rochester and POK objected to the section (at the time, they couldn't meet the implementation description) ... and it got pulled.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Larrabee delayed: anyone know what's happening? Newsgroups: comp.arch,alt.folklore.computers Date: Sun, 27 Dec 2009 01:23:55 -0500
re:
The communication group had other mechanisms besides outright opposition. At one point the disk division had pushed thru corporate approval for a kind of distributed environment product ... and the communication group changed tactics (from outright opposition) to claiming that it had corporate strategic responsibility for selling such products. The product then had a price increase of nearly ten times (compared to what the disk division had been planning on selling it for).
The other problem with the product was that the shipped mainframe
support only got about 44kbytes/sec thruput while using up a 3090
processor(/cpu). I did the enhancements that added RFC1044 support to the
product, and in some tuning tests at cray research got 1mbyte/sec
thruput while using only a modest amount of 4341 processor (an
improvement of approx. 500 times in terms of instructions executed per
byte moved) ... the tuning tests were memorable in other ways ... the trip
was a NW flt to Minneapolis that left the ground 20 minutes late ... however it
was still wheels-up out of SFO five minutes before the earthquake
hit. misc. past posts mentioning rfc1044 support
https://www.garlic.com/~lynn/subnetwork.html#1044
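the ~500x figure can be sanity-checked with back-of-envelope arithmetic. the MIPS ratings below are my rough assumptions for the period (not numbers from the post):

```python
# Back-of-envelope check of the "~500x fewer instructions per byte" claim.
# MIPS figures are assumed period estimates, not measurements.
MIPS_3090 = 30e6    # assumed: roughly one 3090 processor
MIPS_4341 = 1.2e6   # assumed: roughly a 4341 processor

KBYTE = 1024
MBYTE = 1024 * KBYTE

# base product: a full 3090 cpu consumed moving 44 kbytes/sec
base_inst_per_byte = MIPS_3090 / (44 * KBYTE)
# rfc1044 path: (at most) a 4341's worth of cpu moving 1 mbyte/sec
tuned_inst_per_byte = MIPS_4341 / (1 * MBYTE)

# on these assumptions, the ratio comes out on the order of 500
improvement = base_inst_per_byte / tuned_inst_per_byte
```

since "only a modest amount" of the 4341 was used, the true ratio would be even larger; the point is that the claimed order of magnitude is plausible.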
also slightly related:
https://www.garlic.com/~lynn/2009s.html#32 Larrabee delayed: anyone know what's happening?
slight digression: the mainframe product had done its tcp/ip protocol
stack in vs/pascal. It had none of the buffer-related exploits that are
common in C language implementations. It wasn't that it was impossible
to make such errors in pascal ... it was that it was nearly as hard
to make such errors (in pascal) as it is hard not to make such errors in C. misc.
past posts
https://www.garlic.com/~lynn/subintegrity.html#overflow
In the time-frame of doing the rfc 1044 support, I was also getting involved
in HIPPI standards and what was to become the FCS standard ... at the
same time as trying to figure out what to do about SLA when rs/6000
shipped. ESCON was the mainframe variant that ran 200mbits/sec ... but
got only about 17mbytes/sec aggregate thruput, minor reference:
http://www-01.ibm.com/software/htp/tpf/tpfug/tgs03/tgs03l.txt
RS/6000 SLA was tweaked to 220mbits/sec ... and was looking at significantly better than 17mbytes/sec sustained ... and full-duplex, in each direction (not aggregate) ... in large part because it wasn't simulating half-duplex with the end-to-end synchronous latencies.
also, while the communication group was doing things like trying to
shutdown things like client/server (as part of preserving the terminal
emulation install base), we had come up with 3-tier architecture and
were out pitching it to customer executives (and taking more than a
few barbs from the communication group) ... misc. past post mentioning
3-tier
https://www.garlic.com/~lynn/subnetwork.html#3tier
also these old posts with references to the (earlier) period ... with
pieces from '88 3-tier marketing pitch
https://www.garlic.com/~lynn/96.html#16 middle layer
https://www.garlic.com/~lynn/96.html#17 middle layer
this is reference to jan92 meeting looking at part of ha/cmp scale-up
(commercial & database as opposed to numerical intensive) & FCS
https://www.garlic.com/~lynn/95.html#13
where FCS is looking better than 100mbyte/sec full-duplex (i.e.
100mbyte/sec in each direction). for other drift ... some old
email more related to ha/cmp scale-up for numerical intensive and
some other national labs issues:
https://www.garlic.com/~lynn/2006x.html#3 Why so little parallelism?
now part of client/server ... two of the people mentioned in the jan92 meeting reference later left and show up at a small client/server startup responsible for something called "commerce server" (we had also left, in part because the ha/cmp scale-up had been transferred and we were told we weren't to work on anything with more than four processors) ... and we were brought in as consultants because they wanted to do payment transactions. The startup had also invented this technology called "SSL" that they wanted to use ... and the result is now frequently called "electronic commerce".
Part of this "electronic commerce" thing was something called a
"payment gateway" (which we periodically claim was the original
"SOA") ... some past posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
which required a lot of availability ... taking payment transactions
from webservers potentially all over the world; for part of the
configuration we used rs/6000 ha/cmp configurations.
https://www.garlic.com/~lynn/subtopic.html#hacmp
in any case ... one of the latest buzzwords is "cloud computing" ... which
appears to be (at least) trying to move all the data back into a datacenter
... with some resemblance to old-time commercial time-sharing ... for
other drift, misc. past posts mentioning (mainframe) virtual-machine-based
commercial time-sharing service bureaus starting in the late 60s
and going at least into the mid-80s
https://www.garlic.com/~lynn/submain.html#timeshare
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: DEC-10 SOS Editor Intra-Line Editing Newsgroups: alt.folklore.computers,alt.sys.pdp10 Date: Sun, 27 Dec 2009 10:29:07 -0500
"Dave Wade" <g8mqw@yahoo.com> writes:
cp67 (& later vm370) convention was that base assembler file had its statements sequenced by 1000 in cols 73-80 (and source having "ISEQ 73,80").
the cp67 CMS "UPDATE" command used source sequence numbers (in the base/source
file) to specify what to replace/insert/delete ... aka
./ R nnnnn [nnnnn]
./ I nnnnn
./ D nnnnn [nnnnn]
however, the sequence numbers in the replaced/inserted records had to
be manually typed (CMS edit had a command that would reserialize the
whole source ... specifying a starting number and increment).
I was making so many changes as an undergraduate ... that I did the "$"
convention ... which would automatically generate the sequence numbers
for replace/insert records. recent reference:
https://www.garlic.com/~lynn/2009s.html#17 old email
later editors would automatically generate/save an "update" file based on edit source changes.
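a minimal sketch of how those control statements drive a sequenced source. this is my simplified reconstruction (control cards already parsed into tuples, sequence numbers modeled as integers rather than cols 73-80), not the actual UPDATE implementation:

```python
# Apply parsed "./ R|I|D" control statements to a sequenced source file.
# source: list of (seqno, text) pairs.
# updates: list of (op, lo, hi_or_None, new_records) where new_records
# carry their own sequence numbers -- the manual typing that the "$"
# convention automated.
def apply_update(source, updates):
    out = list(source)
    for op, lo, hi, new_records in updates:
        hi = lo if hi is None else hi
        if op == "I":
            # ./ I nnnnn : insert after the record numbered lo
            i = next(k for k, (s, _) in enumerate(out) if s == lo) + 1
            out[i:i] = new_records
        else:
            # ./ R or ./ D : find the span with seqno in [lo, hi]
            i = next(k for k, (s, _) in enumerate(out) if s >= lo)
            j = i
            while j < len(out) and out[j][0] <= hi:
                j += 1
            out[i:j] = new_records if op == "R" else []
    return out
```

sequencing the base file by 1000s (as described above) leaves room to insert new records between any two existing ones without reserializing.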
some past posts mentioning UPDATE command dot/slash statements
https://www.garlic.com/~lynn/2002n.html#39 CMS update
https://www.garlic.com/~lynn/2004b.html#59 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004g.html#43 Sequence Numbbers in Location 73-80
https://www.garlic.com/~lynn/2004p.html#10 vm/370 smp support and shared segment protection hack
https://www.garlic.com/~lynn/2005i.html#30 Status of Software Reuse?
https://www.garlic.com/~lynn/2006g.html#5 The Pankian Metaphor
https://www.garlic.com/~lynn/2006m.html#44 Musings on a holiday weekend
https://www.garlic.com/~lynn/2006n.html#45 sorting
https://www.garlic.com/~lynn/2006u.html#26 Assembler question
https://www.garlic.com/~lynn/2006w.html#42 vmshare
https://www.garlic.com/~lynn/2006w.html#48 vmshare
https://www.garlic.com/~lynn/2007e.html#59 FBA rant
https://www.garlic.com/~lynn/2008r.html#38 "True" story of the birth of the IBM PC
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Date: Sun, 27 Dec 2009 10:56:29 -0500 Subject: old modems MailingList: hillgang II
A Brief History of Modems
wiki modem page:
https://en.wikipedia.org/wiki/Modem
from above:
In December 1972, Vadic introduced the VA3400. This device was
remarkable because it provided full duplex operation at 1,200 bit/s
over the dial network, using methods similar to those of the 103A in
that it used different frequency bands for transmit and receive. In
November 1976, AT&T introduced the 212A modem to compete with
Vadic. It was similar in design to Vadic's model, but used the lower
frequency set for transmission.
... snip ...
in the 80s, one of the things I was doing was HSDT (high-speed data
transport) project
https://www.garlic.com/~lynn/subnetwork.html#hsdt
getting modems for greater than T1 (~1.5mbits/sec) was something of a
pain. Also, since the links were used for some corporate internal
network traffic ... corporate had a requirement that all links had to
be encrypted (in the mid-80s, there was a comment that the internal
network had over half of all the link encryptors in the world). Recent
mention of getting involved in designing an encryptor that was
significantly faster, cheaper, stronger:
https://www.garlic.com/~lynn/2009l.html#14 August 7, 1944: today is the 65th Anniversary of the Birth of the Computer
https://www.garlic.com/~lynn/2009p.html#32 Getting Out Hard Drive in Real Old Computer
also ... tcp/ip is the technical basis for the modern internet ... but
NSFNET backbone was the operational basis for the modern internet (and
CIX was the business basis for the modern internet). misc. old email
mentioning NSFNET backbone related stuff
https://www.garlic.com/~lynn/lhwemail.html#nsfnet
internal politics prevented us from actually doing something for the
NSFNET backbone. The director of NSF thought that he might be able to
help by writing a letter to the corporation (requesting our
participation) ... copying the CEO ... but that just made the internal
politics worse. misc. past posts mentioning NSFNET backbone
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: lynn@garlic.com (Lynn Wheeler) Date: 27 Dec, 2009 Subject: Six Months Later, MasterCard Softens a Controversial PCI Rule Blog: Payment Systems Network
Six Months Later, MasterCard Softens a Controversial PCI Rule
from above:
That policy generated many complaints from Level 2 merchants, who
security experts say would have to pay anywhere from $100,000 to $1
million for a QSA's services. MasterCard's policy also
diverged from Visa Inc.'s, which lets Level 2 merchants do
... snip ...
There was a news article earlier about a $4b cost ... but that may have
been just the bill for conforming crypto ... not the yearly costs
associated with compliance audits.
https://www.garlic.com/~lynn/2009j.html#26 Price Tag for End-to-End Encryption: $4.8 Billion, Mercator Says
https://www.garlic.com/~lynn/2009j.html#29 Price Tag for End-to-End Encryption: $4.8 Billion, Mercator Says
https://www.garlic.com/~lynn/2009j.html#58 Price Tag for End-to-End Encryption: $4.8 Billion, Mercator Says
we had been brought in by small client/server startup to do payment transactions for something now frequently referred to as "electronic commerce" ... somewhat as a result, in the mid-90s, we were asked to participate in the x9a10 financial standard working group ... which had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments. part of x9a10 was detailed end-to-end threat & vulnerability studies of the various environments. there were some metaphors to characterize the current infrastructure:
dual-use metaphor; information used by crooks to perform fraudulent transactions ... is also required as part of scores of business processes going on at millions of locations around the world ... as part of normal business. as a result, we've frequently commented that even if the planet was buried under miles of (information hiding) crypto ... it still wouldn't prevent information leakage.
security proportional to risk metaphor; the value of the information to many merchants is possibly a couple dollars (the profit from the transaction) ... and the value to processors is a couple cents (the profit from each transaction); in contrast, the same information is worth 100 to 1000 times more to the crooks. As a result, the crooks can possibly outspend the merchants & processors by a factor of 100 or more (attacking the system) compared with what merchants & processors can justify spending defending it.
as a result, one of the things the x9a10 financial standard working
group did in the x9.59 financial transaction standard ... was slightly
tweak the paradigm and eliminate the usefulness of the information to
the crooks; it did nothing to prevent crooks from being able to steal the
information ... it just eliminated crooks being able to use the stolen
information to perform fraudulent transactions (and therefore their
motivation for stealing it)
https://www.garlic.com/~lynn/x959.html#x959
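the "security proportional to risk" asymmetry above can be put in toy numbers (all assumed for illustration, working in cents for exact arithmetic):

```python
# Toy illustration (numbers assumed) of the security-proportional-to-risk
# asymmetry: each party can rationally spend up to what the data is worth
# to them, so the attacker's budget can dwarf the defender's.
merchant_profit_cents = 200   # assumed: a couple dollars per transaction
processor_profit_cents = 2    # assumed: a couple cents per transaction
crook_value_cents = merchant_profit_cents * 100  # low end of 100x-1000x

# how far the attacker can outspend each defender per transaction
merchant_gap = crook_value_cents // merchant_profit_cents    # 100x
processor_gap = crook_value_cents // processor_profit_cents  # 10000x
```

which is why x9.59's approach of making the stolen information worthless beats trying to outspend the attacker at hiding it.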
then there is this
IBM touts encryption innovation; New technology performs calculations
on encrypted data without decrypting it
http://www.computerworld.com/s/article/9134823/IBM_touts_encryption_innovation
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: DEC-10 SOS Editor Intra-Line Editing Newsgroups: alt.folklore.computers,alt.sys.pdp10 Date: Sun, 27 Dec 2009 14:45:52 -0500
Peter Flass <Peter_Flass@Yahoo.com> writes:
i considered what I did in HASP (implementing cms editor syntax along
with 2741 & tty terminal support) much better than what came along
later in TSO ... (other) recent posts in the hillgang mailing list
https://www.garlic.com/~lynn/2009r.html#63 tty
https://www.garlic.com/~lynn/2009s.html#0 tty
current xedit uses "trunc=nn" to specify how many columns to display
http://publib.boulder.ibm.com/infocenter/zvm/v5r4/topic/com.ibm.zvm.v54.hcpl0/hcsx0b3019.htm
when I added tty support to cp67 ... tty33 regularly truncated (or wrapped) at col. 72.
doing some search engine ... turned up this unrelated reference:
http://csg.uwaterloo.ca/sdtp/watscr.html
from above:
Waterloo SCRIPT is a rewritten and extended version of a processor
called NSCRIPT that had been converted to OS and TSO from CP-67/CMS
SCRIPT. The original NSCRIPT package is available from the SHARE Program
Library. Waterloo obtained NSCRIPT in late 1974 as a viable alternative
to extending ATS to meet local requirements. The local acceptance of
Waterloo SCRIPT has continued to provide the motivation for additional
on-going development.
... snip ...
and the above was used at cern ... leading up to creation of HTML
when we moved a couple years ago ... lots of stuff (including old
manuals) went into a storage locker ... so it takes a lot more effort to
dig out the old cp67 documentation; however bitsavers has one of the
manuals:
http://www.bitsavers.org/pdf/ibm/360/cp67/
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why Coder Pay Isn't Proportional To Productivity
Newsgroups: alt.folklore.computers
Date: Sun, 27 Dec 2009 15:16:15 -0500

"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
once they are in management ... then it is necessary to depreciate the value of everybody still programming ... preferably having large numbers (on the theory that management value is proportional to the size of the organization) of (low-value) interchangeable people (no skill required).
another view ... would be Boyd's to be or to do ... some recent
references:
https://www.garlic.com/~lynn/2009b.html#25 The recently revealed excesses of John Thain, the former CEO of Merrill Lynch, while the firm was receiving $25 Billion in TARP funds makes me sick
https://www.garlic.com/~lynn/2009h.html#5 mainframe replacement (Z/Journal Does it Again)
https://www.garlic.com/~lynn/2009o.html#47 U.S. begins inquiry of IBM in mainframe market
https://www.garlic.com/~lynn/2009p.html#34 big iron mainframe vs. x86 servers
https://www.garlic.com/~lynn/2009p.html#60 MasPar compiler and simulator
https://www.garlic.com/~lynn/2009q.html#37 The 50th Anniversary of the Legendary IBM 1401
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Larrabee delayed: anyone know what's happening?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 28 Dec 2009 09:52:38 -0500

Carlie Coats <carlie@jyarborough.com> writes:
some x-over between ncar (national center for atmospheric research), ucar (university corporation for atmospheric research), and mesa (table mesa drive).
in the early 90s, congress passed some legislation that relaxed some anti-trust provisions and provided for some other stuff ... that was to promote commercial transfer of gov. technology ... with the objective of improving the US competitive position in the world. this shows up (at least) in the commercialization of various stuff from the national labs ... including various kinds of storage management; LANL ... datatree; LLNL ... unitree; and NCAR (SAN mentioned in previous post) ... Mesa Archival.
We were actively involved in the unitree effort and also got asked to do
some stuff with the Mesa Archival effort by people at NCAR. Part of it
was that the san jose disk division was investing in/funding Mesa Archival
activity ... and we were asked to go by Mesa Archival to see how things
were going &/or provide help. This was somewhat to sidestep some of the
internal politics that happened with more direct activity.
https://www.garlic.com/~lynn/2009s.html#34 Larrabee delayed: anyone know what's happening?
After we left in '92 ... we did various consulting activities ... like
the stuff for a small client/server startup that is now frequently called
"electronic commerce" ... and for Steve Chen when he was CTO at Sequent.
There was also a guy at LLNL who was trying to "commercialize" various
LLNL technologies ... one was trying to move some LLNL chip technology
into commercial smartcard world. Part of that was using the anti-trust
relaxation for commercial consortium and the formation of FSTC
... current FSTC (even tho there appears to be little current LLNL
activity)
http://www.fstc.org/
even the LLNL FSTC webpage at the wayback machine says the page
has moved and then redirects to the above URL
https://web.archive.org/web/*/http://www.llnl.gov/fstc
misc. past posts mentioning Mesa Archival:
https://www.garlic.com/~lynn/2001.html#21 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001.html#22 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001f.html#66 commodity storage servers
https://www.garlic.com/~lynn/2002e.html#46 What goes into a 3090?
https://www.garlic.com/~lynn/2002g.html#61 GE 625/635 Reference + Smart Hardware
https://www.garlic.com/~lynn/2002q.html#23 Free Desktop Cyber emulation on PC before Christmas
https://www.garlic.com/~lynn/2003b.html#29 360/370 disk drives
https://www.garlic.com/~lynn/2003b.html#31 360/370 disk drives
https://www.garlic.com/~lynn/2003h.html#6 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003i.html#53 A Dark Day
https://www.garlic.com/~lynn/2004d.html#75 DASD Architecture of the future
https://www.garlic.com/~lynn/2004p.html#29 FW: Is FICON good enough, or is it the only choice we get?
https://www.garlic.com/~lynn/2005e.html#12 Device and channel
https://www.garlic.com/~lynn/2005e.html#15 Device and channel
https://www.garlic.com/~lynn/2005e.html#16 Device and channel
https://www.garlic.com/~lynn/2005e.html#19 Device and channel
https://www.garlic.com/~lynn/2006n.html#29 CRAM, DataCell, and 3850
https://www.garlic.com/~lynn/2006u.html#27 Why so little parallelism?
https://www.garlic.com/~lynn/2007j.html#47 IBM Unionization
https://www.garlic.com/~lynn/2008b.html#58 How does ATTACH pass address of ECB to child?
https://www.garlic.com/~lynn/2008p.html#51 Barbless
https://www.garlic.com/~lynn/2009k.html#58 Disksize history question
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Larrabee delayed: anyone know what's happening?
Newsgroups: alt.folklore.computers
Date: Mon, 28 Dec 2009 10:35:09 -0500

Anne & Lynn Wheeler <lynn@garlic.com> writes:
part of the above was that the 1990 census showed a marked decline in the education level (especially science & math) of citizens ... and there were various efforts to counter the downward spiral. science&technology was becoming a major US (& world) economic driver ... and US citizens weren't keeping up. one of the reports was that half the 18yr olds were "functionally illiterate". Other reports had half of the advanced science/technology/math degrees (from US educational institutions) going to foreigners ... and the US economy increasingly being propped up by foreigners.
misc. past posts mentioning "functionally illiterate"
https://www.garlic.com/~lynn/2002k.html#45 How will current AI/robot stories play when AIs are real?
https://www.garlic.com/~lynn/2003i.html#28 Offshore IT
https://www.garlic.com/~lynn/2003i.html#45 Offshore IT
https://www.garlic.com/~lynn/2003i.html#55 Offshore IT
https://www.garlic.com/~lynn/2003p.html#33 [IBM-MAIN] NY Times editorial on white collar jobs going
https://www.garlic.com/~lynn/2004b.html#42 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2004d.html#18 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2004h.html#18 Low Bar for High School Students Threatens Tech Sector
https://www.garlic.com/~lynn/2005e.html#48 Mozilla v Firefox
https://www.garlic.com/~lynn/2005g.html#43 Academic priorities
https://www.garlic.com/~lynn/2006g.html#20 The Pankian Metaphor
https://www.garlic.com/~lynn/2006l.html#63 DEC's Hudson fab
https://www.garlic.com/~lynn/2007g.html#7 U.S. Cedes Top Spot in Global IT Competitiveness
https://www.garlic.com/~lynn/2007i.html#24 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007i.html#79 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007j.html#31 IBM Unionization
https://www.garlic.com/~lynn/2007j.html#51 IBM Unionization
https://www.garlic.com/~lynn/2007j.html#80 IBM Unionization
https://www.garlic.com/~lynn/2007j.html#85 IBM Unionization
https://www.garlic.com/~lynn/2007k.html#10 IBM Unionization
https://www.garlic.com/~lynn/2007k.html#30 IBM Unionization
https://www.garlic.com/~lynn/2007k.html#34 IBM Unionization
https://www.garlic.com/~lynn/2007k.html#42 IBM Unionization
https://www.garlic.com/~lynn/2007n.html#68 Poll: oldest computer thing you still use
https://www.garlic.com/~lynn/2007o.html#21 U.S. Cedes Top Spot in Global IT Competitiveness
https://www.garlic.com/~lynn/2007o.html#22 U.S. Cedes Top Spot in Global IT Competitiveness
https://www.garlic.com/~lynn/2007o.html#31 EZPass: Yes, Big Brother IS Watching You!
https://www.garlic.com/~lynn/2007v.html#29 folklore indeed
https://www.garlic.com/~lynn/2008.html#39 competitiveness
https://www.garlic.com/~lynn/2008k.html#5 Republican accomplishments and Hoover
https://www.garlic.com/~lynn/2008q.html#55 Can outsourcing be stopped?
https://www.garlic.com/~lynn/2008s.html#20 Five great technological revolutions
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PCI and Network Encryption
Newsgroups: bit.listserv.ibm-main
Date: Mon, 28 Dec 2009 13:36:10 -0500

HMerritt@JACKHENRY.COM (Hal Merritt) writes:
then with regard to the comments in the above ... that the problem would remain even if the planet was buried under miles of (information hiding) encryption (aka the information is trivially used by crooks for fraudulent transactions at the same time that it is required in scores of business processes located at millions of locations around the planet):
IBM touts encryption innovation; New technology performs calculations on
encrypted data without decrypting it
http://www.computerworld.com/s/article/9134823/IBM_touts_encryption_innovation
now if IBM encryption could encrypt the account number on the payment card ... so that it is NEVER exposed, even at point-of-sale.
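as a toy illustration of computing on ciphertexts (NOT the lattice-based fully homomorphic scheme the article describes) ... textbook RSA without padding happens to be multiplicatively homomorphic; the product of two ciphertexts decrypts to the product of the plaintexts. a sketch with deliberately tiny, insecure parameters:

```python
# toy textbook-RSA parameters -- far too small to be secure
p, q = 61, 53
n = p * q                            # modulus 3233
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+ modular inverse)

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

a, b = 6, 7
# multiply the two ciphertexts without ever decrypting either one ...
c = (enc(a) * enc(b)) % n
# ... and the result decrypts to the product of the plaintexts
print(dec(c))   # 42
```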
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: lynn@garlic.com (Lynn Wheeler)
Date: 28 Dec, 2009
Subject: Audits VII: the future of the Audit is in your hands
Blog: Financial Cryptography

Audits VII: the future of the Audit is in your hands
I've mentioned several times being at a european ceo/executive financial & exchange conference several years ago, in a session on the spreading issues with sarbanes-oxley ... the point was that the audits just catch mistakes ... they have no way of catching determined fraud (at least the audit part; there is the whistle-blower section in the bill).
One of the suggestions was to verify financial transaction claims in any corporation audit ... against corresponding information in other corporation audits (independent verification of the information). The claim was that the current public company audit infrastructure has no mechanism to implement such a thing ... since each individual company pays for the auditing of just its own books (no verification against independent sources).
part of this is the motto "trust, but verify" ... from DTRA (a relative
spent a decade at dtra ... in treaty compliance):
http://www.dtra.mil/
misc. past posts mentioning sarbanes-oxley:
https://www.garlic.com/~lynn/2006h.html#33 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#58 Sarbanes-Oxley
https://www.garlic.com/~lynn/2006i.html#1 Sarbanes-Oxley
https://www.garlic.com/~lynn/2006j.html#28 Password Complexity
https://www.garlic.com/~lynn/2006o.html#35 the personal data theft pandemic continues
https://www.garlic.com/~lynn/2006u.html#22 AOS: The next big thing in data storage
https://www.garlic.com/~lynn/2007b.html#63 Is Silicon Valley strangeled by SOX?
https://www.garlic.com/~lynn/2007j.html#0 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007j.html#74 IBM Unionization
https://www.garlic.com/~lynn/2007j.html#75 IBM Unionization
https://www.garlic.com/~lynn/2007o.html#0 The Unexpected Fact about the First Computer Programmer
https://www.garlic.com/~lynn/2007r.html#61 The new urgency to fix online privacy
https://www.garlic.com/~lynn/2008.html#71 As Expected, Ford Falls From 2nd Place in U.S. Sales
https://www.garlic.com/~lynn/2008.html#78 As Expected, Ford Falls From 2nd Place in U.S. Sales
https://www.garlic.com/~lynn/2008g.html#17 Hannaford breach illustrates dangerous compliance mentality
https://www.garlic.com/~lynn/2008n.html#0 Blinkylights
https://www.garlic.com/~lynn/2008n.html#2 Blinkylights
https://www.garlic.com/~lynn/2008n.html#72 Why was Sarbanes-Oxley not good enough to sent alarms to the regulators about the situation arising today?
https://www.garlic.com/~lynn/2008n.html#74 Why can't we analyze the risks involved in mortgage-backed securities?
https://www.garlic.com/~lynn/2008n.html#80 Why did Sox not prevent this financal crisis?
https://www.garlic.com/~lynn/2008o.html#26 SOX (Sarbanes-Oxley Act), is this really followed and worthful considering current Financial Crisis?
https://www.garlic.com/~lynn/2008o.html#68 Blinkenlights
https://www.garlic.com/~lynn/2008o.html#71 Why is sub-prime crisis of America called the sub-prime crisis?
https://www.garlic.com/~lynn/2008o.html#75 In light of the recent financial crisis, did Sarbanes-Oxley fail to work?
https://www.garlic.com/~lynn/2008p.html#8 Global Melt Down
https://www.garlic.com/~lynn/2008q.html#19 Collateralized debt obligations (CDOs)
https://www.garlic.com/~lynn/2008q.html#58 Obama, ACORN, subprimes (Re: Spiders)
https://www.garlic.com/~lynn/2008s.html#8 Top financial firms of US are eyeing on bailout. It implies to me that their "Risk Management Department's" assessment was way below expectations
https://www.garlic.com/~lynn/2008s.html#9 Blind-sided, again. Why?
https://www.garlic.com/~lynn/2008s.html#20 Five great technological revolutions
https://www.garlic.com/~lynn/2008s.html#24 Garbage in, garbage out trampled by Moore's law
https://www.garlic.com/~lynn/2008s.html#28 Garbage in, garbage out trampled by Moore's law
https://www.garlic.com/~lynn/2008s.html#30 How reliable are the credit rating companies? Who is over seeing them?
https://www.garlic.com/~lynn/2009.html#15 What are the challenges in risk analytics post financial crisis?
https://www.garlic.com/~lynn/2009.html#52 The Credit Crunch: Why it happened?
https://www.garlic.com/~lynn/2009.html#53 CROOKS and NANNIES: what would Boyd do?
https://www.garlic.com/~lynn/2009.html#57 CROOKS and NANNIES: what would Boyd do?
https://www.garlic.com/~lynn/2009.html#73 CROOKS and NANNIES: what would Boyd do?
https://www.garlic.com/~lynn/2009b.html#36 A great article was posted in another BI group: "To H*** with Business Intelligence: 40 Percent of Execs Trust Gut"
https://www.garlic.com/~lynn/2009b.html#37 A great article was posted in another BI group: "To H*** with Business Intelligence: 40 Percent of Execs Trust Gut"
https://www.garlic.com/~lynn/2009b.html#48 The blame game is on : A blow to the Audit/Accounting Industry or a lesson learned ???
https://www.garlic.com/~lynn/2009b.html#49 US disaster, debts and bad financial management
https://www.garlic.com/~lynn/2009b.html#52 What has the Global Financial Crisis taught the Nations, it's Governments and Decision Makers, and how should they apply that knowledge to manage risks differently in the future?
https://www.garlic.com/~lynn/2009b.html#53 Credit & Risk Management ... go Simple ?
https://www.garlic.com/~lynn/2009b.html#54 In your opinion, which facts caused the global crise situation?
https://www.garlic.com/~lynn/2009b.html#57 Credit & Risk Management ... go Simple ?
https://www.garlic.com/~lynn/2009b.html#59 As bonuses...why breed greed, when others are in dire need?
https://www.garlic.com/~lynn/2009b.html#73 What can we learn from the meltdown?
https://www.garlic.com/~lynn/2009b.html#80 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009c.html#0 Audit II: Two more scary words: Sarbanes-Oxley
https://www.garlic.com/~lynn/2009c.html#1 Audit II: Two more scary words: Sarbanes-Oxley
https://www.garlic.com/~lynn/2009c.html#3 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009c.html#20 Decision Making or Instinctive Steering?
https://www.garlic.com/~lynn/2009c.html#29 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009c.html#44 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009d.html#0 PNC Financial to pay CEO $3 million stock bonus
https://www.garlic.com/~lynn/2009d.html#3 Congress Set to Approve Pay Cap of $500,000
https://www.garlic.com/~lynn/2009d.html#10 Who will Survive AIG or Derivative Counterparty Risk?
https://www.garlic.com/~lynn/2009d.html#22 Is it time to put banking executives on trial?
https://www.garlic.com/~lynn/2009d.html#37 NEW SEC (Enforcement) MANUAL, A welcome addition
https://www.garlic.com/~lynn/2009d.html#42 Bernard Madoff Is Jailed After Pleading Guilty -- are there more "Madoff's" out there?
https://www.garlic.com/~lynn/2009d.html#61 Quiz: Evaluate your level of Spreadsheet risk
https://www.garlic.com/~lynn/2009d.html#62 Is Wall Street World's Largest Ponzi Scheme where Madoff is Just a Poster Child?
https://www.garlic.com/~lynn/2009d.html#63 Do bonuses foster unethical conduct?
https://www.garlic.com/~lynn/2009d.html#73 Should Glass-Steagall be reinstated?
https://www.garlic.com/~lynn/2009d.html#75 Whistleblowing and reporting fraud
https://www.garlic.com/~lynn/2009e.html#0 What is swap in the financial market?
https://www.garlic.com/~lynn/2009e.html#13 Should we fear and hate derivatives?
https://www.garlic.com/~lynn/2009e.html#35 Architectural Diversity
https://www.garlic.com/~lynn/2009e.html#36 Architectural Diversity
https://www.garlic.com/~lynn/2009e.html#37 How do you see ethics playing a role in your organizations current or past?
https://www.garlic.com/~lynn/2009f.html#2 CEO pay sinks - Wall Street Journal/Hay Group survey results just released
https://www.garlic.com/~lynn/2009f.html#29 What is the real basis for business mess we are facing today?
https://www.garlic.com/~lynn/2009f.html#51 On whom or what would you place the blame for the sub-prime crisis?
https://www.garlic.com/~lynn/2009g.html#7 Just posted third article about toxic assets in a series on the current financial crisis
https://www.garlic.com/~lynn/2009g.html#33 Treating the Web As an Archive
https://www.garlic.com/~lynn/2009h.html#17 REGULATOR ROLE IN THE LIGHT OF RECENT FINANCIAL SCANDALS
https://www.garlic.com/~lynn/2009i.html#60 In the USA "financial regulator seeks power to curb excess speculation."
https://www.garlic.com/~lynn/2009j.html#12 IBM identity manager goes big on role control
https://www.garlic.com/~lynn/2009j.html#30 An Amazing Document On Madoff Said To Have Been Sent To SEC In 2005
https://www.garlic.com/~lynn/2009m.html#89 Audits V: Why did this happen to us ;-(
https://www.garlic.com/~lynn/2009n.html#17 UK issues Turning apology (and about time, too)
https://www.garlic.com/~lynn/2009n.html#20 UK issues Turning apology (and about time, too)
https://www.garlic.com/~lynn/2009o.html#71 "Rat Your Boss" or "Rats to Riches," the New SEC
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DEC-10 SOS Editor Intra-Line Editing
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Tue, 29 Dec 2009 09:26:30 -0500

"Esra Sdrawkcab" <admin@127.0.0.1> writes:
One such marriage was VM370 performance products and ISPF ... the VM performance products had 3 people supporting them (and once in this marriage didn't get any more) while the ISPF group had an enormous number of people (both earned about the same revenue).
There were some number of other such (corporate product) "marriages" ... especially between various VM370 products that had been originally done by one or a very few people, and various MVS products that had been done with large hordes (the more traditional mainstream corporate approach)
slightly related recent thread:
https://www.garlic.com/~lynn/2009s.html#16 Why Coder Pay Isn't Proportional To Productivity
https://www.garlic.com/~lynn/2009s.html#24 Why Coder Pay Isn't Proportional To Productivity
https://www.garlic.com/~lynn/2009s.html#28 Why Coder Pay Isn't Proportional To Productivity
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: lynn@garlic.com (Lynn Wheeler)
Date: 29 Dec, 2009
Subject: Audits VII: the future of the Audit is in your hands
Blog: Financial Cryptography

re:
the comments were that the current paradigm didn't easily support independent verification of every audited transaction because
1) possible conflict of interest ... since the auditing agency was being paid for by the organization that it was auditing
2) lots of the information about every transaction was available in the audits of other public companies ... but because of the lack of an independent audit process ... there was no obvious way of cross-checking all transactions across all audits
There was something analogous in lack of transparency and visibility in other related activities.
1) supposedly the information about illegal naked short sale transactions is available at DTC (or, since the merger with NSCC, DTCC) ... information which DTCC is refusing to release. There are press items about DTCC being sued to make that information available
2) in congressional hearings a year ago into the current financial crisis ... one of the critical components in the transactions resulting in the current financial mess was the rating agencies. The claim was that the seeds for that part of the mess were laid in the early 70s when the rating agencies changed from the buyers paying for the ratings to the sellers paying for the ratings (opening things up for conflict of interest).
Disclaimer: some of the (virtual machine based) online timesharing service bureaus from the early 70s quickly moved up the value chain to financial information. One of them is listed as buying the "Pricing Services" division from one of the rating agencies in the period of changing from buyers paying for the ratings to the sellers paying for the ratings. I had interviewed with them in the late 60s and stayed in touch with some of the people over the years.
In the more recent congressional hearings into the Madoff Ponzi scheme ... it was claimed that tips (52%) turn up 13 times more fraud than audits (4%) ... and that while the SEC didn't have a "tip" phone line ... they did have a 1-800 number for corporations to complain about investigations (some people pointed out that SOX had almost inverted its focus between what turns up the most fraud and what turns up the least ... there is an enormous mismatch when considering the cost of audit vis-a-vis the amount of fraud it turns up)
It was also stated in the Madoff hearings that transparency and visibility was much more important than new legislation.
Disclaimer: somewhat as a result of having participated in the x9.59 transaction standard in the x9a10 financial standard working group, in the late 90s, we were asked into NSCC (hadn't yet merged with DTC) to look at defining a standard that improved security for all trades. Not very far into the effort, the work was suspended; a side-effect of the changes for improving the security on all trades would have been significantly improved visibility and transparency ... something which apparently is not part of the trading culture.
somewhat related recent post in (linkedin) payment systems:
https://www.garlic.com/~lynn/2009s.html#39 Six Months Later, MasterCard Softens a Controversial PCI Rule
https://www.garlic.com/~lynn/2009s.html#44 PCI and Network Encryption
As referred to in the above, the countermeasures and the audits ... are enormously expensive ... with the cost of the activities way out of proportion to the benefits.
This also gets into past naked transaction metaphor discussions
that went on here ... some of my posts archived here:
https://www.garlic.com/~lynn/subintegrity.html#payments
also
http://financialcryptography.com/mt/archives/000745.html
http://financialcryptography.com/mt/archives/000744.html
http://financialcryptography.com/mt/archives/000747.html
http://financialcryptography.com/mt/archives/000749.html
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Larrabee delayed: anyone know what's happening?
Newsgroups: alt.folklore.computers
Date: Tue, 29 Dec 2009 11:18:27 -0500

jmfbahciv <jmfbahciv@aol> writes:
there appears to also be an ethics issue ... reports of a big uptick in cheating ... promoting a culture where it is easier to cheat than to actually do the work.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: lynn@garlic.com (Lynn Wheeler)
Date: 29 Dec, 2009
Subject: Six Months Later, MasterCard Softens a Controversial PCI Rule
Blog: Payment Systems Network

re:
I had managed to add a snide comment on the subject in the ibm mainframe
mailing list (the ibm mainframe mailing list originated on bitnet in the
80s):
https://www.garlic.com/~lynn/2009s.html#44
article also managed to generate a piece in the financial cryptography
blog ....
http://financialcryptography.com/mt/archives/001220.html
part of a series of items about audits; a couple earlier pieces:
http://financialcryptography.com/mt/archives/001131.html
http://financialcryptography.com/mt/archives/001218.html
The trillion times is a heck of a lot more bloat than I was able to show
when there were attempts in the mid-90s to add digital certificate
processing to payment transactions ... that bloat only increased both
payment transaction payload size and payment transaction computational
processing by just two orders of magnitude (100 times)
https://www.garlic.com/~lynn/subpubkey.html#bloat
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Daylight Savings Time again
Newsgroups: alt.folklore.computers
Date: Tue, 29 Dec 2009 15:40:54 -0500

Eric Chomko <pne.chomko@comcast.net> writes:
there was an article someplace that the previous administration allowed an oil industry lawyer/lobbyist to mark up administration position papers on energy policy and things like global warming. a nasa scientist called a press conference showing one of his papers modified by such a person.
a quicky search engine pass turns up some reference:
http://www.nytimes.com/2006/01/29/science/earth/29climate.html
but there was some article showing a returned paper that had handwritten
notes about the required modifications. the wiki page makes some reference
to the subject (but not the specific paper):
https://en.wikipedia.org/wiki/Global_warming_controversy
from above:
The groups presented a survey that shows two in five of the 279 climate
scientists who responded to a questionnaire complained that some of
their scientific papers had been edited in a way that changed their
meaning. Nearly half of the 279 said in response to another question
that at some point they had been told to delete reference to "global
warming" or "climate change" from a report.
... snip ...
other stuff here:
http://www.rollingstone.com/politics/story/15148655/the_secret_campaign_of_president_bushs_administration_to_deny_global_warming/print
and here:
http://healthandenergy.com/global_warming.htm
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DEC-10 SOS Editor Intra-Line Editing
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Tue, 29 Dec 2009 23:04:39 -0500

glen herrmannsfeldt <gah@ugcs.caltech.edu> writes:
This is part of a presentation that I was giving ... including
at the SNA architecture review board in Raleigh
https://www.garlic.com/~lynn/99.html#67
and one of the more naive people in the Raleigh audience asked how an organization less than 1/10th the size of the NCP/pu4 (aka 37xx) Raleigh group could turn out something with so much more feature/function.
after the meeting ... the director running the ARB caught me to ask who had arranged for me to make the presentation (he wasn't planning on rewarding them).
slightly related x-over
https://www.garlic.com/~lynn/2009s.html#16 Why Coder Pay Isn't Proportional To Productivity
https://www.garlic.com/~lynn/2009s.html#24 Why Coder Pay Isn't Proportional To Productivity
https://www.garlic.com/~lynn/2009s.html#28 Why Coder Pay Isn't Proportional To Productivity
https://www.garlic.com/~lynn/2009s.html#41 Why Coder Pay Isn't Proportional To Productivity
misc. other recent posts mentioning above presentation:
https://www.garlic.com/~lynn/2009e.html#4 Cost of CPU Time
https://www.garlic.com/~lynn/2009j.html#60 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2009k.html#70 An inComplete History Of Mainframe Computing
https://www.garlic.com/~lynn/2009l.html#3 VTAM security issue
https://www.garlic.com/~lynn/2009l.html#66 ACP, One of the Oldest Open Source Apps
https://www.garlic.com/~lynn/2009r.html#0 Small Server Mob Advantage
https://www.garlic.com/~lynn/2009r.html#21 Small Server Mob Advantage
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DEC-10 SOS Editor Intra-Line Editing
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Wed, 30 Dec 2009 09:35:17 -0500

Peter Flass <Peter_Flass@Yahoo.com> writes:
i never did follow up ... but during the presentation ... there were about 40 people in the room ... about half were younger, appeared eager and more interested in what I was talking about than in what they were having to work on ... the other half were older, appeared to be much less technically oriented, and didn't appear happy about the response from the youngsters.
I was talking about working features that the youngsters possibly only barely dreamed about ever existing ... and would have been nearly impossible to do based on their existing NCP/PU4 implementation (would need more powerful processor, and a layered infrastructure with more internal feature/function).
During the presentation, I mentioned that more than a decade earlier ... the science center reviewed "peachtree" (an unannounced processor for series/1) and pointed out that it was a significantly better processor for use in the 3705 (than the one they chose ... which was going to represent a feature/function inhibitor) ... however, while peachtree simplified doing a lot of things (that 3705/3725 found nearly impossible) ... by that point, even "peachtree" had been pushed just about to its limits and something like a move to 801/rios was needed to enable further enhancements.
and as been referred to ... having an organization less than 1/10th the size of NCP/pu4 group helped significantly.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DEC-10 SOS Editor Intra-Line Editing
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Wed, 30 Dec 2009 09:42:12 -0500

re:
I also mentioned that nearly two decades previously ... as an undergraduate doing lots of modifications to cp67 at the univ. ... I had to add TTY support to cp67. In that process, i tried to make the (then) mainframe terminal controller do something that it couldn't quite do. this helped motivate the univ. to start a "clone" controller effort: reverse engineer the mainframe channel and build a channel interface board for an Interdata/3 ... programmed to emulate the 2702 (but with added feature/function).
Later, four of us got written up as starting the clone controller
business. A descendant of the box, under the Perkin/Elmer name (after PE
bought Interdata), was still being sold. misc. past posts mentioning
clone controller
https://www.garlic.com/~lynn/submain.html#360pcm
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Problem with XP scheduler? Newsgroups: microsoft.public.win32.programmer.kernel,alt.folklore.computers,comp.os.ms-windows.programmer.nt.kernel-mode Date: Wed, 30 Dec 2009 11:35:50 -0500
J de Boyne Pollard <J.deBoynePollard@Tesco.NET> writes:
there was "tightly-coupled" (shared-memory) multiple processors ... there was also "loosely-coupled" (non-shared-memory) but possibly shared disk or other i/o, multiple processors (clusters) ... and then there are the "shared-nothing" clusters.
in the 90s ... sequent claimed that it did much of the "windows" scale-up work for shared-memory multiple processors (getting NT running efficiently on multiprocessor machines with more than two processors).
when charlie was working on fine-grain multiprocessor locking for cp67 kernel (late 60s), he invented compare&swap instruction (name of instruction chosen because CAS are his initials). The attempt to get it included in 370 (mainframe) multiprocessor was initially rebuffed ... claiming that TEST&SET instruction was more than sufficient. The challenge was to come up with uses for compare&swap instruction that weren't multiprocessor specific. Thus was born the examples of using compare&swap in multithreaded/multiprogrammed applications ... to coordinate the different threads ... regardless of whether the underlying hardware was just a single processor or multiple processors.
It became standard use for highly optimized, multithreaded subsystems ... like various kinds of database management systems (DBMS) ... and started showing up on processors other than 370s.
description from current mainframe principles of operation (much of it the original justification used for compare&swap in early 70s):
A.6 Multiprogramming and Multiprocessing Examples
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/A.6?DT=20040504121320
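the usage pattern from those multiprogramming examples boils down to a fetch/compute/compare&swap retry loop ... here is a minimal sketch in python; python has no user-level compare&swap, so the hardware instruction is simulated with a lock (on 370 it is the CS instruction itself that supplies the atomicity) ... the class and names are illustrative only:

```python
import threading

class Word:
    """Simulates a storage word with a hardware compare&swap."""
    def __init__(self, value=0):
        self.value = value
        self._lock = threading.Lock()   # stands in for the atomic hardware op

    def compare_and_swap(self, old, new):
        """Atomically: if word == old, store new and succeed; else fail."""
        with self._lock:
            if self.value == old:
                self.value = new
                return True
            return False

def add_to_counter(word, n):
    """The retry-loop pattern: fetch, compute, compare&swap, retry on failure."""
    while True:
        old = word.value
        if word.compare_and_swap(old, old + n):
            return

counter = Word(0)
threads = [threading.Thread(target=add_to_counter, args=(counter, 1))
           for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.value)   # 8 regardless of thread interleaving
```

the point of the pattern (per the original justification) is that it coordinates threads correctly whether the underlying hardware is one processor or many ... no thread ever holds a lock across the fetch and the update.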
scheduling can be somewhat orthogonal ... I had done dynamic adaptive resource management for cp67 as an undergraduate in the 60s (sometimes referred to as the "fair share scheduler" because the default resource management policy was "fair share").
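the default "fair share" policy decision can be sketched roughly like this ... the task names, shares, and consumption numbers are made up for illustration, this is not the actual cp67 implementation:

```python
def fair_share_pick(runnable):
    """Dispatch the runnable task that is furthest behind its fair share.

    Each task is (name, share, consumed): 'share' is its resource
    entitlement, 'consumed' the CPU it has actually used.  Picking the
    minimum consumed/share ratio biases dispatch toward tasks that are
    behind on their entitlement -- the 'fair share' default policy.
    """
    return min(runnable, key=lambda t: t[2] / t[1])

# three runnable tasks: "b" has used the least relative to its share
tasks = [("a", 1.0, 5.0), ("b", 1.0, 2.0), ("c", 2.0, 5.0)]
print(fair_share_pick(tasks)[0])   # "b" (ratio 2.0, vs 5.0 and 2.5)
```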
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Problem with XP scheduler? Newsgroups: microsoft.public.win32.programmer.kernel,alt.folklore.computers,comp.os.ms-windows.programmer.nt.kernel-mode Date: Wed, 30 Dec 2009 11:57:19 -0500
re:
actually, in the mid to late 80s (>20 years ago) looking at some of the unix scheduler code ... I realized that I had rewritten it nearly two decades previously (over 40 yrs ago) in cp67.
I attribute it to possibly having originated in CTSS. Some of the CTSS people went to the science center on the 4th flr ... and did (virtual machine) cp40 ... which then morphed into (virtual machine) cp67. Others of the CTSS people went to Multics on the 5th flr ... and there is various folklore regarding Multics and UNIX.
In any case, when some people from the science center came out and installed cp67 at the univ in Jan68 ... I got to do lots of changes/enhancements. One was rewriting the scheduling infrastructure ... part of that original cp67 scheduling implementation (that I rewrote) bore some amount of similarity to the unix scheduling implementation that I later ran across in the mid & late 80s.
misc. past posts mentioning fair share scheduler
https://www.garlic.com/~lynn/subtopic.html#fairshare
there have been comments in the past about multi-core being a solution for poorly implemented schedulers ... as long as there are at least as many cores/processors as there are things to run ... then a scheduler is never faced with a decision about what *NOT* to run (at any particular moment) ... sort of the inverse of the scheduling decision about what to run ... as long as there are enough processors to run everything.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: DEC-10 SOS Editor Intra-Line Editing Newsgroups: alt.folklore.computers,alt.sys.pdp10 Date: Wed, 30 Dec 2009 12:21:14 -0500
jmfbahciv <jmfbahciv@aol> writes:
it has been suggested that the row&column table metaphor helped (eventually) contribute to RDBMS uptake ... but may also have contributed to forcing computer applications into limitations associated with such a row&column structured metaphor (trouble dealing with things that don't naturally fit into such a table orientation ... table as a pile of cards ... with each card having the same structure).
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Happy DEC-10 Day Newsgroups: alt.sys.pdp10,alt.folklore.computers Date: Wed, 30 Dec 2009 17:24:56 -0500
despen writes:
kernel processing for svc202 then was identical regardless of where the svc202 originated from. kernel would then run svc202 command processing thru standard search ... first was it in abbreviation/synonym table, then was it EXEC file (batch commands) somewhere in search order, then was it MODULE file (binary executable) somewhere in search order, and finally was it an internal kernel system service.
it made everything callable from everywhere ... but it also made it possible to do customized processing of anything ... aka making a personal EXEC file customized front-end for some kernel system service (by giving it the same name).
in the morph from cp67 cms (cambridge monitor system) to vm370 cms (conversational monitor system) ... svc203 was added that was specifically for invoking internal kernel services ... and bypassed all the search lookup gorp. svc203 had a binary parameter list as opposed to the svc202 symbolic tokenized parameter list (which required quite a bit more decoding overhead).
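the svc202 search order described above can be sketched roughly like this ... the table contents and function names here are hypothetical illustration, not actual CMS internals:

```python
def resolve_command(name, synonyms, exec_files, module_files, kernel_services):
    """Resolve a command per the svc202 lookup order described above:
    abbreviation/synonym table first, then EXEC files in the search
    order, then MODULE files (binary executables), finally internal
    kernel system services.  Returns (kind, target) or None.
    """
    name = synonyms.get(name, name)      # expand abbreviation/synonym
    if name in exec_files:
        return ("EXEC", name)            # a user EXEC shadows everything below
    if name in module_files:
        return ("MODULE", name)
    if name in kernel_services:
        return ("KERNEL", name)
    return None

# a personal EXEC front-ending a kernel service of the same name:
print(resolve_command("q", {"q": "query"}, {"query"}, set(), {"query"}))
# -> ('EXEC', 'query')
```

because EXEC files are searched before kernel services, the customized front-end wins ... which is exactly the "personal EXEC file front-end for some kernel system service" trick mentioned above.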
following shows an assembler program making svc202 call (generated by
the HNDINT assembler macro) to kernel service to wait on interrupt from
card reader. It also shows making same kernel call implemented directly
from EXEC file.
https://www.garlic.com/~lynn/2004b.html#56 Oldest running code.
... much later Wang/VS apparently was looking to get out of the hardware business and was convinced to relogo RS/6000 (801 rios risc processors), porting Wang/VS to RS/6000 (directly on the hardware, not on top of aix). Some from the austin workstation group left and joined wang.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: DEC-10 SOS Editor Intra-Line Editing Newsgroups: alt.folklore.computers,alt.sys.pdp10 Date: Wed, 30 Dec 2009 17:58:04 -0500
"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
misc. past ha/cmp posts:
https://www.garlic.com/~lynn/subtopic.html#hacmp
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Problem with XP scheduler? Newsgroups: microsoft.public.win32.programmer.kernel,alt.folklore.computers,comp.os.ms-windows.programmer.nt.kernel-mode Date: Thu, 31 Dec 2009 10:07:25 -0500
"Maxim S. Shatskih" <maxim@storagecraft.com.no.spam> writes:
Sequent had 16-way & 32-way ... and had done a lot of work on Unix to improve Unix scale-up in more than 2-way & 4-way operation (dynix).
before we left ibm in early 90s, we had done some work with SCI ... for
possible ha/cmp scale-up ... although at the time we were dealing
with processor chip that didn't provide for any cache consistency
... and all scale-up had to be "cluster". old reference to jan92 meeting
in ellison's conference room on ha/cmp cluster scale-up
https://www.garlic.com/~lynn/95.html#13
other posts mentioning ha/cmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp
later, two of the people (at the jan92 meeting) had left and showed up at
small client/server startup responsible for something called "commerce
server" (the startup had also invented something called "SSL"). we were
brought in because they wanted to do payment transactions on their server
... and the result is now frequently called "electronic commerce". Part
of the "electronic commerce" work was something called a "payment
gateway" ... which acted as interface between webservers on the internet
and financial networks for payment transactions (we periodically refer
to it as original SOA). The initial "payment gateway" was an HA/CMP
configuration with several other boxes around the edges providing
various kinds of integrity, diagnostic, and security functions.
misc. past posts mentioning "electronic commerce" and "payment gateway"
work:
https://www.garlic.com/~lynn/subnetwork.html#gateway
the small client/server startup also had a growing presence on the internet ... with servers delivering their client & server products. They were using (unix) servers from companies in the silicon valley area ... that were quickly overloaded and required installation of more & more servers ... each with its own unique host name ... and internet customers were asked to selectively specify different URL host names when connecting (trying to spread out the internet load, this was before some of the front-end router work that was done at google). then they brought in a large sequent box and the problems went away (not just the large sequent box and the dynix smp scale-up work ... dynix also had some amount of tcp/ip protocol scale-up work)
At the time NT had some SMP support ... but running on an 8-way didn't show any more thruput than running on a 4-way (and little thruput improvement on a 4-way compared to a 2-way). Somehow sequent was involved to get NT running on their 32-way box ... and do a lot of scale-up work to show increasing thruput as configurations scaled past 4-way (aka more than four processors) in SMP configuration.
Later in the 90s, sequent was doing a 256-way smp SCI-based machine (NUMA-Q) and doing further work on their (dynix) unix to scale-up to 256-way (although I don't know of any work on NT for 256-way). In that timeframe, Steve Chen (from cray & chen supercomputers) was CTO at sequent ... and we did some consulting for him.
Now while sequent did a lot of work on the NT kernel to show increasing thruput as processors increased past two ... (at the time) NT thruput still didn't match Unix products (on the same hardware). at one point, there was a joint project that redmond had for putting a large web-based service on the internet and we were brought in to do some work on it. We showed that NT still didn't have the thruput necessary to support the fully deployed operation ... and the redmond group decided that I would be the person to explain to their CEO why a UNIX platform would have to be used for the deployment. Before that actually took place, the executive running the group ... decided that (instead) the web service would have a staged roll-out ... the web population supported would never exceed the scale-up capability of NT (and therefore it would not be necessary to use a UNIX platform ... and I wouldn't have to explain to their CEO why NT wasn't being used).
now there are two somewhat related issues (but not identical) ... whether NT showed increasing (SMP) thruput as the number of processors increased ... and NT thruput compared to other SMP implementations on the same hardware.
sequent wiki page:
https://en.wikipedia.org/wiki/Sequent_Computer_Systems
23May95 article:
Sequent Unveils New High-End Systems for Windows NT; Based on
Industry-Leading Platform Symmetry...
http://www.allbusiness.com/technology/software-services-applications-computer/7126055-1.html
from above:
The WinServer systems add higher-end Windows NT-based performance to the
existing integrated solutions Sequent provides to support customers'
business requirements in decision support, online transaction processing
and messaging. WinServer systems offer customers the benefits of proven
Symmetry hardware, the industry's most mature and technologically
advanced SMP platform, which has been installed with the UNIX operating
system at thousands of Sequent customer sites around the world.
... snip ...
9Nov92 article:
JUST WHEN SEQUENT THOUGHT IT WAS SAFE...
http://www.businessweek.com/bwdaily/dnflash/content/jan2009/db20090129_707519.htm?chan=top+news_top+news+index+-+temp_top+story
from above:
Microsoft Corp. has picked Sequent multiprocessing technology for
Windows NT, the advanced operating system software it is readying for
1993.
... snip ...
steve chen wiki page:
https://en.wikipedia.org/wiki/Steve_Chen_%28computer_engineer%29
note the reference to SCI in the above ... is different than
the SCI technology used by Convex, SGI, Sequent, DG & others
for scalable shared memory multiprocessors ... wiki page:
https://en.wikipedia.org/wiki/Scalable_Coherent_Interface
above makes mention that the standardization effort eventually morphed into current InfiniBand ("which is similar to SCI in many ways")
infiniband wiki:
https://en.wikipedia.org/wiki/InfiniBand
for some topic drift ... only marginally related posts (security,
not SMP):
https://www.garlic.com/~lynn/2009l.html#20 Cyber attackers empty business accounts in minutes
other earlier posts mentioning the above:
https://www.garlic.com/~lynn/2009.html#60 The 25 Most Dangerous Programming Errors
https://www.garlic.com/~lynn/2009g.html#18 Top 10 Cybersecurity Threats for 2009, will they cause creation of highly-secure Corporate-wide Intranets?
https://www.garlic.com/~lynn/2009h.html#28 Computer virus strikes US Marshals, FBI affected
https://www.garlic.com/~lynn/2009i.html#22 My Vintage Dream PC
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 360 programs on a z/10 Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Thu, 31 Dec 2009 10:38:59 -0500
Tony Harding <tohard@universalexports.bogus.net> writes:
now Amdahl claims that he never knew about future system effort ... he
left because they weren't going to build his advanced 360 computers.
however, he gave a talk in a (large) MIT auditorium in the 70s. One of the
questions from the audience was how did he convince people to invest in
his clone processors. His reply was ... that even if IBM were to totally
walk away from 370 ... customers already had something like $200B
invested in 360/370 software ... which would keep him in business
through the end of the century (aka might be considered a veiled
reference to future system effort that was going on at the time). some
of that is mentioned in this reference (including copy of old memo doing some
analysis of FS):
http://www.jfsowa.com/computer/memo125.htm
from above:
Of course, IBM could have delivered a machine with similar or better
performance in 1975 instead of 1977, if they hadn't killed all the
System/370 design projects to avoid competition with the FS fantasy.
... snip ...
the distraction/fantasy of FS ... allowed 370 product pipelines to go dry ... which has been used to explain how clone processors gained such a market foothold.
another reference with some reference to FS:
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm
from above:
IBM tried to react by launching a major project called the 'Future
System' (FS) in the early 1970's. The idea was to get so far ahead
that the competition would never be able to keep up, and to have such
a high level of integration that it would be impossible for
competitors to follow a compatible niche strategy. However, the
project failed because the objectives were too ambitious for the
available technology. Many of the ideas that were developed were
nevertheless adapted for later generations. Once IBM had acknowledged
this failure, it launched its 'box strategy', which called for
competitiveness with all the different types of compatible
sub-systems. But this proved to be difficult because of IBM's cost
structure and its R&D spending, and the strategy only resulted in a
partial narrowing of the price gap between IBM and its rivals.
... snip ...
this has some reference to FS discussion from the Ferguson & Morris book:
https://www.garlic.com/~lynn/2001f.html#33
partial quote from Ferguson & Morris comments
Basically they say that so much energy went into FS that s370 was
neglected, hence Japanese plug-compatibles got a good foothold in the
market; after FS's collapse a tribe of technical folks left IBM or went
into corporate seclusion; and perhaps most damaging, the old culture
under Watson Snr and Jr of free and vigorous debate was replaced with
sycophancy and make no waves under Opel and Akers. It's claimed that
thereafter, IBM lived in the shadow of defeat (by the FS failure)
... snip ...
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM 9393 RVAs "Obsolete" for Sure Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Thu, 31 Dec 2009 11:20:30 -0500
m42tom-ibmmain@YAHOO.COM (Tom Marchant) writes:
a possible caveat was VM "shadow" tables ... which were whatever (page size) the virtual machine was using.
one of the things that Endicott tried to do with ECPS on 370 138/148 was to turn it into a vm370 "only" machine ... i.e. vm370 would be shipped as native part of the hardware ... somewhat like LPAR is today. With ECPS on 148 ... there were situations where VS1 actually ran faster on vm370 on 148 ... than w/o vm370. The issue was that 2k pages had an advantage in very small real storage ... but by 138/148 ... real storage sizes had significantly increased. VS1 under vm370 ... with VS1 "handshaking" ... VS1 created 16mbyte virtual address space tables (using 2kbyte pages) that ran in a 16mbyte virtual machine. The result was that VS1 never had a requirement to page ... and vm370 did all the paging in 4k pages (instead of VS1 doing it in 2k pages). With handshaking, VS1 could do a task-switch while vm370 was handling a page fault for the VS1 virtual machine (and vm370 4k page i/o handling was significantly more efficient than VS1 2k page i/o handling).
in any case, for various reason ... corporate hdqtrs overruled endicott shipping 138/148 as vm370 machines (with vm370 installed before it left the plant).
now there were some customer problems for vm370/vs1 customers moving from 168-1 to 168-3. the big speedup for 168-3 was doubling the cache size. to do this ... they used the "2k" address bit for cache line indexing. To avoid duplicates ... when running with 2k page tables ... the 168-3 dropped back to only using half the cache. whenever the 168-3 switched between 2k page mode and 4k page mode ... it also did a complete cache flush. Now, even tho vm370 did all paging in 4k mode ... when the vs1 virtual machine was running ... it used 2k page "shadow" tables (that emulated the vs1 page tables) ... but would switch to 4k page mode (and the virtual machine tables) whenever the vm370 kernel was entered.
The result was heavy vm370/vs1 customers actually saw performance degradation when they moved from 168-1 to 168-3 (since the double sized cache was never used when vs1 was running and there was a lot of additional hardware overhead constantly flushing the cache switching back & forth between vs1 virtual machine and the vm370 kernel).
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Problem with XP scheduler? Newsgroups: microsoft.public.win32.programmer.kernel,alt.folklore.computers,comp.os.ms-windows.programmer.nt.kernel-mode Date: Thu, 31 Dec 2009 13:54:51 -0500
re:
the folklore in a.f.c. is that NT started out as VMS by some people hired
from DEC. VMS had specialized in some amount of commercial
dataprocessing ... but didn't have a particularly long SMP support
heritage. post with old email about vax/vms SMP product announcements
https://www.garlic.com/~lynn/2007.html#email880324
https://www.garlic.com/~lynn/2007.html#email880329
in this post
https://www.garlic.com/~lynn/2007.html#46 How many 36-bit Unix ports in the old days?
then windows continues as desktop platform and NT becomes the (somewhat compatible) server platform. then when the two platforms were consolidated ... more was taken from the desktop platform than the server platform ... possibly implying that some amount of the SMP work was dropped(?).
this possibly accounts for the stories in the press about intel having
to explain to the CEO in redmond why single processor chips couldn't
just continue to get faster ... and why there was the move to multi-core
(multiprocessor) chips ... AND why windows (& desktop applications)
would have to significantly improve its SMP support ... referenced in
this article:
http://www.theregister.co.uk/2007/05/01/mundie_mundie/
mentioned in this post
https://www.garlic.com/~lynn/2007i.html#78 John W. Backus, 82, Fortran developer, dies
note ... a.f.c. tends to have some amount of topic drift ... when I
was at SJR, Backus' office was just a couple doors down the corridor.
In any case, above post also has references to old email regarding
boca/os2 group asking me about "scheduling":
https://www.garlic.com/~lynn/2007i.html#email871204
https://www.garlic.com/~lynn/2007i.html#email871204b
in this post
https://www.garlic.com/~lynn/2007i.html#60 John W. Backus, 82, Fortran developer, dies
In any case, part of the desktop platform smp/multicore issue was that a lot of the desktop apps were strictly single-threaded and only ran faster when the processor got faster (additional processors didn't help; multiple processors let multiple different applications run concurrently, but didn't make a single-threaded application run faster) ... and for these apps to show increased thruput ... they would have to be rewritten for multi-thread (&/or parallel) operation.
for other topic drift ... some past posts about working on design for
5-way SMP in the mid-70s (which got canceled before being announced):
https://www.garlic.com/~lynn/submain.html#bounce
which was almost immediately followed by working on design for 16-way
SMP ... which also got canceled before being announced ... some recent
references:
https://www.garlic.com/~lynn/2009o.html#10 Microprocessors with Definable MIcrocode
https://www.garlic.com/~lynn/2009o.html#14 Microprocessors with Definable MIcrocode
https://www.garlic.com/~lynn/2009o.html#17 Broken hardware was Re: Broken Brancher
https://www.garlic.com/~lynn/2009q.html#74 Now is time for banks to replace core system according to Accenture
https://www.garlic.com/~lynn/2009s.html#32 Larrabee delayed: anyone know what's happening?
and as per previous reference ... lots of past posts mentioning SMP (and/or
compare&swap instruction)
https://www.garlic.com/~lynn/subtopic.html#smp
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: CAPS Fantasia Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Thu, 31 Dec 2009 16:47:36 -0500
zoswork@GMAIL.COM (P S) writes:
that has a lot of URL references ... why 360 became EBCDIC and not ASCII
EBCDIC and the P-BIT (The Biggest Computer Goof Ever)
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
other references from the same site:
HOW ASCII CAME ABOUT
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/ASCII.HTM
HOW ASCII GOT ITS BACKSLASH
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/BACSLASH.HTM
SIGNIFICANT ARTICLES ON ASCII
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/INSIDE-A.HTM
ASCII and the Mark of the Beast
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/666.HTM
ORIGIN OF THE ISO REGISTER FOR ASCII-ALTERNATE SETS
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/REGISTRY.HTM
from "EBCDIC and the P-BIT"
Who Goofed?
The culprit was T. Vincent Learson. The only thing for his defense is
that he had no idea of what he had done. It was when he was an IBM Vice
President, prior to tenure as Chairman of the Board, those lofty
positions where you believe that, if you order it done, it actually will
be done. I've mentioned this fiasco elsewhere
... snip ...
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970