List of Archived Posts

2006 Newsgroup Postings (12/17 - 12/23)

What's a mainframe?
IBM sues maker of Intel-based Mainframe clones
The Future of CPUs: What's After Multi-Core?
Why so little parallelism?
S0C1 with ILC 6
S0C1 with ILC 6
Multics on Vmware ?
vmshare
vmshare
Plurals and language confusion
The Future of CPUs: What's After Multi-Core?
The Future of CPUs: What's After Multi-Core?
The Future of CPUs: What's After Multi-Core?
The Future of CPUs: What's After Multi-Core?
IBM ATM machines
The Future of CPUs: What's After Multi-Core?
The Future of CPUs: What's After Multi-Core?
The Future of CPUs: What's After Multi-Core?
The Future of CPUs: What's After Multi-Core?
The Future of CPUs: What's After Multi-Core?
"The Elements of Programming Style"
"The Elements of Programming Style"
'Innovation' and other crimes
Multiple mappings
IBM sues maker of Intel-based Mainframe clones
Executing both branches in advance ?
Multiple mappings
The Future of CPUs: What's After Multi-Core?
The Future of CPUs: What's After Multi-Core?
"The Elements of Programming Style"
"The Elements of Programming Style"
The Future of CPUs: What's After Multi-Core?
Toyota set to lift crown from GM
NSFNET (long post warning)
Year-end computer bug could ground Shuttle
"The Elements of Programming Style"
SSL security with server certificate compromised

What's a mainframe?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What's a mainframe?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 17 Dec 2006 15:53:44 -0700
Anne & Lynn Wheeler <lynn@garlic.com> writes:
in the following, "SJ" (san jacinto) was the code name for the rs/6000.

re:
https://www.garlic.com/~lynn/2006v.html#35 What's a mainframe?

"San Jacinto" morphed into RIOS and RS/6000, old news item

Date: 25 August 1987, 14:39:42 EDT
To: wheeler

From this week's Management Information Systems Week...

IBM's Austin, Texas, manufacturing facility - where the RT was born - is currently putting the final touches on a 10-mips Unix-based workstation, code-named "San Jacinto," according to an industry source.

"It's a follow-on to the RT, due in the first or second quarter" said the source. The San Jacinto will be Posix-compatible, as well.


... snip ...

as i've mentioned before, RT originally started out with ROMP (the chip) and cp.r (written in pl.8) as a follow-on to the office products division displaywriter. when that project was killed, they decided to retarget it to the unix workstation market ... subcontracting an at&t unix port to the same company that had done the pc/ix port.

misc. past romp, rios, 801, fort knox, etc posts
https://www.garlic.com/~lynn/subtopic.html#801

IBM sues maker of Intel-based Mainframe clones

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM sues maker of Intel-based Mainframe clones
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 17 Dec 2006 16:41:15 -0700
phil@ibm-main.lst (Phil Payne) writes:
Purely interpretive execution has been done before and published - I remember a book called "A Compiler Generator" in the early 1970s that contained the complete source code for emulation of /360 code on a /360 - the idea being that you could trace every instruction.

POK had something like that called "redcap". it could be used not only to trace every instruction ... but also to trace all storage references.

the cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

adapted it for tracing instructions and storage references for an application that did semi-automated program reorganization ... optimizing programs for operation in a virtual memory environment.

i had gotten involved in rewriting some of the redcap interfaces to improve the operation/performance for use in the science center application.

the science center application was used quite a bit internally by a number of product developers .... for instance the IMS group in STL made extensive use of it for analysing IMS execution.

eventually it was released as a product called VS/Repack in the spring of 76.

systems journal article describing some of the early work:
D. Hatfield & J. Gerald, Program Restructuring for Virtual Memory, IBM Systems Journal, v10n3, 1971
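a rough sketch of the trace-driven restructuring idea (python, with invented names and a greedy packing heuristic of my own ... not vs/repack's actual algorithm): modules that the trace shows being referenced close together in time get packed onto the same page, shrinking the working set.

from collections import defaultdict

PAGE_SIZE = 4096

def affinity_from_trace(trace, window=32):
    # count how often pairs of modules are referenced close together in time
    affinity = defaultdict(int)
    for i, mod in enumerate(trace):
        for other in trace[max(0, i - window):i]:
            if other != mod:
                affinity[frozenset((mod, other))] += 1
    return affinity

def pack_modules(sizes, affinity):
    # greedily order modules so high-affinity pairs land on the same page
    order, placed = [], set()
    for pair, _ in sorted(affinity.items(), key=lambda kv: -kv[1]):
        for mod in pair:
            if mod not in placed:
                order.append(mod)
                placed.add(mod)
    order += [m for m in sizes if m not in placed]   # cold modules last
    pages, used = [[]], 0
    for mod in order:
        if used + sizes[mod] > PAGE_SIZE and pages[-1]:
            pages.append([])
            used = 0
        pages[-1].append(mod)
        used += sizes[mod]
    return pages

trace = ["A", "C", "A", "C", "B", "A", "C"]        # toy reference trace
sizes = {"A": 1500, "B": 3000, "C": 2000}          # module sizes in bytes
print(pack_modules(sizes, affinity_from_trace(trace)))  # A & C share a page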

misc. past posts mentioning redcap, program restructuring, vs/repack
https://www.garlic.com/~lynn/93.html#4 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/93.html#5 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
https://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#46 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#50 IBM going after Strobe?
https://www.garlic.com/~lynn/2002f.html#50 Blade architectures
https://www.garlic.com/~lynn/2003f.html#15 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#21 "Super-Cheap" Supercomputing
https://www.garlic.com/~lynn/2003f.html#53 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#15 Disk capacity and backup solutions
https://www.garlic.com/~lynn/2003h.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003j.html#32 Language semantics wrt exploits
https://www.garlic.com/~lynn/2004c.html#21 PSW Sampling
https://www.garlic.com/~lynn/2004m.html#22 Lock-free algorithms
https://www.garlic.com/~lynn/2004n.html#55 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2004q.html#76 Athlon cache question
https://www.garlic.com/~lynn/2005.html#4 Athlon cache question
https://www.garlic.com/~lynn/2005d.html#41 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#48 Secure design
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005j.html#62 More on garbage collection
https://www.garlic.com/~lynn/2005k.html#17 More on garbage collection
https://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005n.html#18 Code density and performance?
https://www.garlic.com/~lynn/2005o.html#5 Code density and performance?
https://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2006b.html#23 Seeking Info on XDS Sigma 7 APL
https://www.garlic.com/~lynn/2006e.html#20 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s
https://www.garlic.com/~lynn/2006i.html#37 virtual memory
https://www.garlic.com/~lynn/2006j.html#18 virtual memory
https://www.garlic.com/~lynn/2006j.html#22 virtual memory
https://www.garlic.com/~lynn/2006j.html#24 virtual memory
https://www.garlic.com/~lynn/2006l.html#11 virtual memory
https://www.garlic.com/~lynn/2006o.html#23 Strobe equivalents
https://www.garlic.com/~lynn/2006o.html#26 Cache-Size vs Performance
https://www.garlic.com/~lynn/2006r.html#12 Trying to design low level hard disk manipulation program

The Future of CPUs: What's After Multi-Core?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Mon, 18 Dec 2006 10:30:54 -0700
Anne & Lynn Wheeler <lynn@garlic.com> writes:
reports, VM/370 Modifications (RJ2906) and VM/370 Shared Modules (RJ2928). These reports describe several associated paging system enhancements and give further detail concerning the N.5 page replacement algorithm

re:
https://www.garlic.com/~lynn/2006w.html#46 The Future of CPUs: What's After Multi-Core?

for another reference to the above
https://www.garlic.com/~lynn/2006t.html#13 VM SPOOL question

has old email from '92 about a run-in with somebody who had done some consulting work at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

in the early 70s ... and who more recently had designed the Amdahl Huron database system and was working on the implementation. Part of the discussion was that, at the time, he had also been co-author of a paper on LRU replacement algorithms ... in addition to replacement algorithms as they applied to DBMS buffer cache management.

for additional topic drift ... other recent posts about database buffer caching
https://www.garlic.com/~lynn/2006m.html#32 Old Hashing Routine
https://www.garlic.com/~lynn/2006o.html#22 Cache-Size vs Performance
https://www.garlic.com/~lynn/2006r.html#31 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006w.html#27 Generalised approach to storing address details

Why so little parallelism?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why so little parallelism?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 18 Dec 2006 11:32:12 -0700
eugene@cse.ucsc.edu (Eugene Miya) writes:
I know it's sitting at the Museum.

attached email was not long before we got told that the project was being transferred and we weren't supposed to work on anything with more than four processors.

it turned out that there was a major product announcement later, but it was by kingston, not us, and we never did do any scale-up announcements. if you have anything in the computer museum, it didn't come from us.

previous cluster-in-a-rack/MEDUSA refs:
https://www.garlic.com/~lynn/2006w.html#13 IBM sues maker of Intel-based Mainframe clones
https://www.garlic.com/~lynn/2006w.html#14 IBM sues maker of Intel-based Mainframe clones
https://www.garlic.com/~lynn/2006w.html#20 cluster-in-a-rack
https://www.garlic.com/~lynn/2006w.html#26 Why so little parallelism?
https://www.garlic.com/~lynn/2006w.html#38 Why so little parallelism?
https://www.garlic.com/~lynn/2006w.html#39 Why so little parallelism?
https://www.garlic.com/~lynn/2006w.html#40 Why so little parallelism?
https://www.garlic.com/~lynn/2006w.html#41 Why so little parallelism?

as in several previous references, here are old references to the meeting at oracle
https://www.garlic.com/~lynn/95.html#13
https://www.garlic.com/~lynn/96.html#15

Date: 29 January 1992, 16:28:48 PST
From: wheeler
Subject: "cluster computing"

I believe we got a charter last week to do generalized "cluster computing" with both horizontal growth and availability. We are going full steam ahead now with plans for major product announce directions in 6-8 weeks.

Hester gave Larry Ellison (pres. of Oracle) some general "technology directions".

I'm now in the middle with nailing down overall ... total computing system environment ... for that year end time frame (database, horizontal growth, availability, connectivity, function, fileserving, applications, system management, enterprise wide services, etc, etc).

I wasn't able to make the LLNL meeting tues & weds. this week ... but XXXXX and YYYYY came by this afternoon (after the meeting).

YYYYY had already put together pictures of the visionary direction (i.e. for LLNL national storage center) titled "DATAstore" with NSC providing a generalized fabric switch/router with lots of things connected to it ... both directly & fully-meshed and in price/performance hierarchy ... that had HA/6000 as the controlling central "brains". He effectively said I can get a generalized NSC switch/router built off combining current NSC/DX technology (including the RISC/6000 SLA interface) and their HiPPI switch by 2nd qtr. By ye he should have for me a generalized switch fabric (called UNIswitch) that has variety of "port" boards

• Sonet,
• FDDI,
• ESCON,
• FCS,
• serial HiPPI,
• parallel HiPPI,
• NSC/DX

In theory, anything coming in any port ... can come out any other port.

Also, YYYYY has built into the "switch fabric" a "security" cross-matrix function that can limit who can talk to who (i.e. otherwise the default fabric is fully-meshed environment, everybody can talk to everybody). I can use this for the HA "I/O fencing" function ... which is absolutely necessary for going greater than two-way.

XXXXX brought up the fact that we have a larger "scope" here and that immediately there are about a dozen large "hot Unitree" activities going on at the moment and that (at least) we three will have to coordinate. One of them is the current LLNL physical data repository technical testbed ... but there are two other production environments at LLNL that have to be addressed in parallel with this work ... and there are another 9 or so locations that we also have to address.

In addition, both NSC and DISCOS have been having some fairly close dealings with Cornell ... both Cornell proper and also with regard to the bid on the NSF stuff. Also the Grummen SI stuff for Nasa/Huntsville came up.

ZZZZZ was also in town visiting Almaden about some multi-media stuff ... and I invited him to sit in on the meeting with YYYYY and XXXXX. That gave us the opportunity to discuss a whole other series of opportunities (like at Cargil(sp?)). The tie-in at Discos is interesting since General Atomics also operates the UCSD supercomputing center ... and at least two of the papers at last fall SOSP on multi-media filesystem requirements were from UCSD (XXXXX knows the people doing the work).

Also in the discussions with XXXXX about Unitree development we covered various things that WWWWW (LLNL) had brought up in the past couple days (off line) and the Cummings Group stuff (NQS-exec, network caching, log-structured filesystem, etc). XXXXX wants to have two 3-way meetings now ... one between WWWWW, XXXXX and me ... in addition to the 3-way (or possibly 4-way) meeting between Cummings, XXXXX, and me.

This is all the visionary stuff that we sort of ran thru for the total computing environment that we would like to have put together for next year (hardware, software, distributed, networking, system management, commercial, technical, filesystems, information management). Effectively YYYYY, XXXXX, and I came out of the meeting with ground-work platform for both hardware & software to take over the whole worlds' computing environment. Little grandiose, but we will be chipping away at it in nice manageable business justified "chunks/deliverables".

This is consistent with an overall theme and a series of whitepapers that we have an outside consultant working on (was one of the founders of Infoworld and excellent "tech writer") ... talking about the computing vision associated with "cluster computing" (which includes the MEDUSA stuff ... and HA/MEDUSA being base for HA/6000 scale-up).


... snip ...

as mentioned ... within a few days of sending the above email, the whole project was taken away from us and transferred to another organization and we were told we couldn't work on anything with more than four processors. and then within a couple weeks
https://www.garlic.com/~lynn/2001n.html#6000clusters1
... scientific and technical only
and then a little later in the year
https://www.garlic.com/~lynn/2001n.html#6000clusters2
... caught by surprise

other old MEDUSA related email
https://www.garlic.com/~lynn/lhwemail.html#medusa

and of course, we were producing a product, i.e ha/cmp ... misc. past posts mentioning
https://www.garlic.com/~lynn/subtopic.html#hacmp

the reference to enterprise wide services was part of our 3-tier architecture ... misc. recent postings
https://www.garlic.com/~lynn/2006u.html#55 What's a mainframe?
https://www.garlic.com/~lynn/2006v.html#10 What's a mainframe?
https://www.garlic.com/~lynn/2006v.html#14 In Search of Stupidity
https://www.garlic.com/~lynn/2006v.html#35 What's a mainframe?

and past collected postings mentioning 3-tier
https://www.garlic.com/~lynn/subnetwork.html#3tier

for other relational drift and scale-up
https://www.garlic.com/~lynn/2004d.html#6 Memory Affinity

and couple other old rdbms/oracle references:
https://www.garlic.com/~lynn/2004o.html#40 Facilities "owned" by MVS
https://www.garlic.com/~lynn/2000e.html#49 How did Oracle get started?

and, of course, misc. and sundry posts about system/r
https://www.garlic.com/~lynn/submain.html#systemr

part of ha/cmp scale-up was work on distributed lock manager ... misc past posts:
https://www.garlic.com/~lynn/2000.html#64 distributed locking patents
https://www.garlic.com/~lynn/2000g.html#32 Multitasking and resource sharing
https://www.garlic.com/~lynn/2001.html#40 Disk drive behavior
https://www.garlic.com/~lynn/2001c.html#66 KI-10 vs. IBM at Rutgers
https://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001e.html#4 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001f.html#22 Early AIX including AIX/370
https://www.garlic.com/~lynn/2001i.html#21 3745 and SNI
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2001j.html#47 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#5 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#18 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001l.html#5 mainframe question
https://www.garlic.com/~lynn/2001l.html#8 mainframe question
https://www.garlic.com/~lynn/2001l.html#17 mainframe question
https://www.garlic.com/~lynn/2001n.html#23 Alpha vs. Itanic: facts vs. FUD
https://www.garlic.com/~lynn/2002b.html#36 windows XP and HAL: The CP/M way still works in 2002
https://www.garlic.com/~lynn/2002b.html#37 Poor Man's clustering idea
https://www.garlic.com/~lynn/2002d.html#31 2 questions: diag 68 and calling convention
https://www.garlic.com/~lynn/2002e.html#67 Blade architectures
https://www.garlic.com/~lynn/2002e.html#71 Blade architectures
https://www.garlic.com/~lynn/2002f.html#1 Blade architectures
https://www.garlic.com/~lynn/2002f.html#4 Blade architectures
https://www.garlic.com/~lynn/2002f.html#5 Blade architectures
https://www.garlic.com/~lynn/2002f.html#6 Blade architectures
https://www.garlic.com/~lynn/2002f.html#17 Blade architectures
https://www.garlic.com/~lynn/2002k.html#8 Avoiding JCL Space Abends
https://www.garlic.com/~lynn/2002m.html#21 Original K & R C Compilers
https://www.garlic.com/~lynn/2002n.html#27 why does wait state exist?
https://www.garlic.com/~lynn/2002o.html#14 Home mainframes
https://www.garlic.com/~lynn/2003c.html#53 HASP assembly: What the heck is an MVT ABEND 422?
https://www.garlic.com/~lynn/2003d.html#2 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003d.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003d.html#54 Filesystems
https://www.garlic.com/~lynn/2003h.html#35 UNIX on LINUX on VM/ESA or z/VM
https://www.garlic.com/~lynn/2003i.html#70 A few Z990 Gee-Wiz stats
https://www.garlic.com/~lynn/2003k.html#10 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003k.html#17 Dealing with complexity
https://www.garlic.com/~lynn/2004c.html#53 defination of terms: "Application Server" vs. "Transaction Server"
https://www.garlic.com/~lynn/2004d.html#72 ibm mainframe or unix
https://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2004i.html#2 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004i.html#8 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2004m.html#0 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004m.html#5 Tera
https://www.garlic.com/~lynn/2004q.html#10 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004q.html#37 A Glimpse into PC Development Philosophy
https://www.garlic.com/~lynn/2004q.html#70 CAS and LL/SC
https://www.garlic.com/~lynn/2004q.html#71 will there every be another commerically signficant new ISA?
https://www.garlic.com/~lynn/2005.html#40 clusters vs shared-memory (was: Re: CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE))
https://www.garlic.com/~lynn/2005.html#55 Foreign key in Oracle Sql
https://www.garlic.com/~lynn/2005b.html#1 Foreign key in Oracle Sql
https://www.garlic.com/~lynn/2005f.html#18 Is Supercomputing Possible?
https://www.garlic.com/~lynn/2005f.html#32 the relational model of data objects *and* program objects
https://www.garlic.com/~lynn/2005h.html#26 Crash detection by OS
https://www.garlic.com/~lynn/2005h.html#28 Crash detection by OS
https://www.garlic.com/~lynn/2005i.html#42 Development as Configuration
https://www.garlic.com/~lynn/2005m.html#8 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005q.html#49 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005r.html#23 OS's with loadable filesystem support?
https://www.garlic.com/~lynn/2005u.html#38 Mainframe Applications and Records Keeping?
https://www.garlic.com/~lynn/2006b.html#8 Free to good home: IBM RT UNIX
https://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#41 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006d.html#14 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006j.html#20 virtual memory
https://www.garlic.com/~lynn/2006o.html#24 computational model of transactions
https://www.garlic.com/~lynn/2006o.html#32 When Does Folklore Begin???
https://www.garlic.com/~lynn/2006o.html#33 When Does Folklore Begin???
https://www.garlic.com/~lynn/2006o.html#62 Greatest Software, System R

S0C1 with ILC 6

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: S0C1 with ILC 6
Newsgroups: bit.listserv.ibm-main
Date: Mon, 18 Dec 2006 12:24:57 -0700
shmuel+ibm-main@ibm-main.lst (Shmuel Metz , Seymour J.) writes:
It can if it straddles page boundaries and the second page is marked invalid. That's not something that you should see in an application program.

aka, instruction fetch can page fault ... independent of the instruction execution page fault.

one of the reasons that the 360/67 had an 8-entry associative array (dlat, tlb, etc) ... was that the worst case for "EXECUTE" of SS instructions required eight different page addresses.

"EX" (execute) of another instruction

2 pages - instruction start and end (crossing page boundary)

target (SS) instruction

2 pages - instruction start and end (crossing page boundary)
2 pages - operand1 start and end (crossing page boundary)
2 pages - operand2 start and end (crossing page boundary)

------

8 pages

=============

the instruction fetch for load address could itself involve two page faults (when the instruction crosses a page boundary).
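a minimal sketch (python, with addresses i made up for illustration) counting the distinct 4k pages touched in that worst case ... the EX and its SS target each straddle a page boundary, as do both storage operands:

PAGE = 4096

def pages(start, length):
    # set of page numbers spanned by a storage reference
    return set(range(start // PAGE, (start + length - 1) // PAGE + 1))

touched = set()
touched |= pages(0x0FFE, 4)    # EX instruction, crosses pages 0/1
touched |= pages(0x2FFC, 6)    # target SS instruction, crosses pages 2/3
touched |= pages(0x4FFF, 16)   # operand1, crosses pages 4/5
touched |= pages(0x6FFF, 16)   # operand2, crosses pages 6/7

print(len(touched))            # -> 8 ... hence the 8-entry associative array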

S0C1 with ILC 6

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: S0C1 with ILC 6
Newsgroups: bit.listserv.ibm-main
Date: Mon, 18 Dec 2006 14:01:06 -0700
Edward Jaffe wrote:
Untrue! There are no Program Exceptions whatsoever listed for the LA instruction in PoOp. In fact, it specifically states:

"No storage references for operands take place, and the address is not inspected for access exceptions."


you can get an instruction fetch program exception ... separate from instruction execution program exceptions ... recent post
https://www.garlic.com/~lynn/2006x.html#4 S0C1 with ILC 6

the note in the PoP refers to instruction execution program exceptions (storage references) ... as opposed to a possible instruction fetch program exception.

repeat from previous post/reference:

aka, instruction fetch can page fault ... independent of the instruction execution page fault.

one of the reasons that the 360/67 had an 8-entry associative array (dlat, tlb, etc) ... was that the worst case for "EXECUTE" of SS instructions required eight different page addresses.

"EX" (execute) of another instruction

2 pages - instruction start and end (crossing page boundary)

target (SS) instruction

2 pages - instruction start and end (crossing page boundary)
2 pages - operand1 start and end (crossing page boundary)
2 pages - operand2 start and end (crossing page boundary)

------

8 pages

=============

the instruction fetch for load address could itself involve two page faults (when the instruction crosses a page boundary).

Multics on Vmware ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multics on Vmware ?
Newsgroups: alt.os.multics
Date: Tue, 19 Dec 2006 07:09:24 -0700
Renaissance <mapsons.gianl@libero.uk> writes:
1. This is basically a Virtual Machine running ubuntu linux with a
Hercules freeware Emulator built onto it.
^^^^^^^^^^^^^^^^^^^^^^^^^^


many of the references to virtual appliance are more along the lines of a service virtual machine ... a virtual machine providing specific services ... recent reference to 1981 email about using a (then) existing service virtual machine for a network-based real-time public key server ... something like a pgp key server ...
https://www.garlic.com/~lynn/2006w.html#12 more secure communication over the network

misc. other recent posts about service virtual machine and/or virtual appliance
https://www.garlic.com/~lynn/2006p.html#10 What part of z/OS is the OS?
https://www.garlic.com/~lynn/2006t.html#45 To RISC or not to RISC
https://www.garlic.com/~lynn/2006t.html#46 To RISC or not to RISC
https://www.garlic.com/~lynn/2006v.html#22 vmshare
https://www.garlic.com/~lynn/2006w.html#16 intersection between autolog command and cmsback (more history)
https://www.garlic.com/~lynn/2006w.html#25 To RISC or not to RISC
https://www.garlic.com/~lynn/2006w.html#52 IBM sues maker of Intel-based Mainframe clones

vmshare

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: vmshare
Newsgroups: alt.folklore.computers
Date: Tue, 19 Dec 2006 08:17:38 -0700
ref:
https://www.garlic.com/~lynn/2006w.html#42 vmshare

a little more topic drift ... from the bureau of misinformation
Date: 03/02/87 13:42:13
To: wheeler

Re: VM and executives -It came as a surprise in my meeting with <a top corporate executive> that Profs ran on VM. He had been led to believe it was a VTAM application and that was why vm networking had to be linked with VTAM.


... snip ...

the above wasn't an isolated incident; i'd heard other similar reports. in this period, there was an enormous amount of misinformation being pushed up to corporate executives in an attempt to get a corporate directive to repopulate the internal network with dumb terminal communication operation ... as well as claims that it could be used for NSFNET:
https://www.garlic.com/~lynn/2006w.html#21 SNA/VTAM for NSFNET

the internal network had nothing to do with dumb terminal communication operation ... misc. past posts mentioning the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

my wife ran into conflict with some of this same group when she served her stint in pok in charge of loosely coupled architecture (the mainframe term for "cluster"). there was eventually an uneasy truce where everything that crossed the glass house boundary supposedly had to be under control of the strategic dumb terminal communication operation (and even that truce they kept chipping away at). misc. past posts making reference to her stint in pok in charge of loosely coupled architecture
https://www.garlic.com/~lynn/submain.html#shareddata

we also ran into conflict a little later when we were doing 3-tier architecture ... and taking a lot of heat from the SAA crowd
https://www.garlic.com/~lynn/subnetwork.html#3tier

and some other drift ... references to a presentation claiming that the same organization was going to be responsible for the demise of the disk division (the dumb terminal communication operation was increasingly isolating the glass house from the emerging online, networking world)
https://www.garlic.com/~lynn/2001j.html#16 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2003c.html#23 difference between itanium and alpha
https://www.garlic.com/~lynn/2004f.html#39 Who said "The Mainframe is dead"?
https://www.garlic.com/~lynn/2004m.html#59 RISCs too close to hardware?
https://www.garlic.com/~lynn/2005j.html#33 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005j.html#59 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005r.html#8 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005t.html#30 AMD to leave x86 behind?
https://www.garlic.com/~lynn/2006k.html#25 Can anythink kill x86-64?
https://www.garlic.com/~lynn/2006l.html#4 Google Architecture
https://www.garlic.com/~lynn/2006l.html#38 Token-ring vs Ethernet - 10 years later
https://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006r.html#20 50th Anniversary of invention of disk drives

vmshare

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: vmshare
Newsgroups: alt.folklore.computers
Date: Wed, 20 Dec 2006 00:42:10 -0700
Anne & Lynn Wheeler <lynn@garlic.com> writes:
and some other drift ... references to a presentation claiming that the same organization was going to be responsible for the demise of the disk division (the dumb terminal communication operation was increasingly isolating the glass house from the emerging online, networking world)

re:
https://www.garlic.com/~lynn/2006x.html#7 vmshare

the person that had given the presentation predicting the demise of the disk division had much earlier worked on some large customer accounts and had written about some of his experiences.

some of his writings from long ago and far away ....


The Large Conglomerate

In 1975, a large international conglomerate customer accepted the idea
that it was possible to use IBM 3600 banking systems in a
manufacturing shop floor environment.  As a result, the IBM team and
customer installed what was to be the second MVS SNA system in the
world.  The system (hardware and software) was installed by four
people and was on-line in 10 weeks.  While the effort required 300 to
400 hours overtime by each of the four people, the numerous problems
that were experienced were generally regarded as unique and isolated
situations.  Based on post-installation experiences, the customer and
the IBM team no longer hold that belief; the change in attitude
gradually occurred as various situations developed.

After the above system had been installed for several months, the 3600
system was enhanced to support dial lines as well as leased lines.
This announcement was particularly attractive to the customer since it
had two remote 3600 systems that each required 1000 mile leased lines
which were only used for 30 minutes (maximum) a day.  After
investigation, it was determined that the customer would have to
change the level of microcode in the 3600 controller to obtain the new
function.

This required the customer to

• reassemble his 3600 application programs (APBs)
• reassemble his 3600 SYSGENS (CPGENs)
• install and use the new microcode
• use a new level of 3600 starter diskette.

However, the new level of microcode required a new level of Subsystem
Support Services (SSS) and Program Validation Services (PVS).

• The new level of SSS required a new level of VTAM.
• The new level of VTAM required a new level of NCP
• reassembly of the customer written VTAM programs.

I do not recall if the new level of VTAM also required a new level of
MVS and hence IMS, as the inquiry into this announcement had halted by
this point.  The message was clear: the change from a leased line to a
dial line would have required that virtually every line of IBM system
code in the entire complex be reinstalled.  The change was viewed as a
very small configuration change by the customer but resulted in a very
large system change.  Since this was a corporate data center and
served multiple user divisions in addition to the particular division
using the 3600 system, the impact of the change was totally
unacceptable.  In short, the change would have challenged two data
processing maxims.

• First, any given change can and often does impact service
(availability) levels of seemingly unrelated components in a data
processing system.  The impact is generally unpredictable and usually
undesirable.  (For example, the customer written VTAM 3600 support
program ran for two years without a problem until VSPC was added to
the system.  Suddenly, the customer's program randomly failed when run
during the day but not when run at night.  Later it was discovered
that VSPC was slowing down VTAM enough to cause the application's VTAM
allowable buffer count occasionally to be exceeded.  Hence, VTAM
destroyed the session.)

• Second, each system change should be implemented in isolation of
other changes whenever possible.

... snip ...

Another example of such intertwined software interdependencies can be seen in the contrast between JES2 "networking" and VNET/RSCS "networking".

VNET/RSCS had relatively clean separation of function ... including what could be considered something akin to a gateway function in every node. I've periodically claimed that the arpanet/internet didn't get that capability until the great switchover to internetworking protocol on 1/1/83 ... and that this was one of the reasons why the internal network was larger than the arpanet/internet from just about the beginning until possibly mid-85. misc. past posts about the internal network:
https://www.garlic.com/~lynn/subnetwork.html#internalnet

In the JES2 networking implementation ... it somewhat reflected the intertwined dependencies of many of the communication implementations dating back to the 60s & 70s. Because of the intertwined dependencies, JES2 implementations typically had to be at the same release level to interoperate. Furthermore, it wasn't unusual for JES2 systems at different release levels, attempting to communicate, to crash one or the other of the JES2 systems ... and even take down the associated MVS systems.

In the large world-wide internal network with hundreds (and then thousands) of different systems, it would be extremely unusual to have all systems simultaneously at the same release level. However, such a requirement was common in many traditional communication implementations of the period. There are even historical arpanet BBN notes about regularly scheduled system-wide IMP downtime for support and maintenance, aka periodic complete arpanet outages ... minor ref
https://www.garlic.com/~lynn/2006k.html#10 Arpa address

for slight drift, projection that there might be as many as 100 arpanet nodes by (sometime in) 1983 (from 1980 arpanet newsletter):
https://www.garlic.com/~lynn/2006k.html#40 Arpa address

Over time, a collection of "JES" (nji/nje) line drivers evolved for VNET/RSCS ... drivers that simulated the JES protocol and allowed JES/MVS machines to participate in the internal network. There tended to be unique (VNET) JES drivers specific to each JES/MVS release ... a specific driver would be started on a communication line for whatever JES/MVS release was at the other end of the line. Furthermore, over time, VNET/RSCS evolved a sort of canonical representation of JES communication ... and provided format-conversion interoperability between different JES systems (as a countermeasure to JES systems at different release levels causing each other to crash, even bringing down the whole MVS system).
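to illustrate the canonical-representation idea (python sketch with record layouts invented here ... the real nji/nje formats were far more involved): each release-specific driver decodes its own wire format into one internal form and re-encodes on the way out, so two systems at different release levels never see each other's raw headers.

from dataclasses import dataclass

@dataclass
class CanonicalRecord:             # internal, release-independent form
    origin: str
    destination: str
    payload: bytes

class JesRel3Driver:               # invented release-specific codec
    def decode(self, raw: bytes) -> CanonicalRecord:
        origin, dest, payload = raw.split(b"|", 2)
        return CanonicalRecord(origin.decode(), dest.decode(), payload)
    def encode(self, rec: CanonicalRecord) -> bytes:
        return b"|".join([rec.origin.encode(),
                          rec.destination.encode(), rec.payload])

class JesRel4Driver:               # different header layout, same canonical form
    def decode(self, raw: bytes) -> CanonicalRecord:
        hdr, payload = raw.split(b"\n", 1)
        dest, origin = hdr.decode().split(",")
        return CanonicalRecord(origin, dest, payload)
    def encode(self, rec: CanonicalRecord) -> bytes:
        return f"{rec.destination},{rec.origin}\n".encode() + rec.payload

# a "rel-3" record is converted, never passed through raw, to a "rel-4" node
rec = JesRel3Driver().decode(b"SANJOSE|HURSLEY|job deck ...")
print(JesRel4Driver().encode(rec))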

There is the relatively well known story about a San Jose plant site JES2/MVS system attempting to communicate with a JES2/MVS system in Hursley and causing the Hursley MVS system to repeatedly crash. The problem then appeared to be blamed on VNET ... for allowing MVS systems to cause each other to crash.

As a result of the enormous vagaries in their implementations ... JES/MVS systems tended to be restricted to boundary nodes ... with VNET/RSCS being the internal corporate networking platform.

disclaimer ... my wife did a stint in the g'burg JES product group before doing her time in POK in charge of loosely coupled architecture
https://www.garlic.com/~lynn/submain.html#shareddata

misc. past posts mentioning JES2 and/or HASP (not just networking):
https://www.garlic.com/~lynn/submain.html#hasp

for a little drift ... recent posts mentioning service virtual machines and/or virtual appliances:
https://www.garlic.com/~lynn/2006p.html#10 What part of z/OS is the OS?
https://www.garlic.com/~lynn/2006t.html#45 To RISC or not to RISC
https://www.garlic.com/~lynn/2006t.html#46 To RISC or not to RISC
https://www.garlic.com/~lynn/2006v.html#22 vmshare
https://www.garlic.com/~lynn/2006w.html#12 more secure communication over the network
https://www.garlic.com/~lynn/2006w.html#16 intersection between autolog command and cmsback (more history)
https://www.garlic.com/~lynn/2006w.html#25 To RISC or not to RISC
https://www.garlic.com/~lynn/2006w.html#52 IBM sues maker of Intel-based Mainframe clones
https://www.garlic.com/~lynn/2006x.html#6 Multics on Vmware ?

Plurals and language confusion

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Plurals and language confusion
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 20 Dec 2006 09:22:17 -0700
giltjr@EARTHLINK.NET (John S. Giltner, Jr.) writes:
Never noticed that, but I only use ATM's in 7-11s, gas stations, and places like that. I would assume that I can't make a deposit at one of these.

Last time I used an actual bank ATM was a long time ago. I would assume that you can only do this at banks (including banks inside grocery stores), credit unions, or other financial institutions.


for some ATM drift back to computer-related topics ... a couple posts mentioning work on the 2984 ATM machine at the los gatos lab ... and mention of other ATM machine related items:
https://www.garlic.com/~lynn/2002g.html#55 Multics hardware (was Re: "Soul of a New Machine" Computer?)
https://www.garlic.com/~lynn/2002m.html#45 Wanted: the SOUNDS of classic computing
https://www.garlic.com/~lynn/2003k.html#3 Ping: Anne & Lynn Wheeler
https://www.garlic.com/~lynn/2004p.html#25 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2004p.html#26 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2006q.html#5 Materiel and graft
https://www.garlic.com/~lynn/2006u.html#40 New attacks on the financial PIN processing

The Future of CPUs: What's After Multi-Core?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: re: The Future of CPUs: What's After Multi-Core?
Newsgroups: bit.listserv.vmesa-l,alt.folklore.computers
Date: Wed, 20 Dec 2006 12:37:51 -0700
Brian Inglis <Brian.Inglis@SystematicSW.Invalid> writes:
IME the IBM VM guys had very good ideas for interaction using the corporate products and facilities, even though it has never been funded adequately and often nearly terminated.

They were much better than the batch guys at letting the users fully use all the machines' capabilities, providing nearly 100% capacity and keeping terminal response averages close to 0.1s; the batch guys were better at using up all the machines' capabilities, and the users considered themselves lucky to have 70% capacity available and get 1s terminal response times.

The really cool thing about VM systems is that you can do anything with the software under timesharing: develop a new OS, test a changed OS, trace the execution of an OS.

Once found a bug crashing a DB product only after tracing about a million instructions, a few times over to get it exactly right, with very selective output, sufficient to pinpoint the faulty code: try doing that on a real front panel or console!


for some total drift ... a different reference to "tracing" in support of semi-automated program reorganization to optimize execution for virtual memory environment
https://www.garlic.com/~lynn/2006x.html#1 IBM sues maker of Intel-based Mainframe clones

as an undergraduate in the 60s, i had done dynamic adaptive resource management ... it was sometimes referred to as fair share scheduling, since the default resource management policy was fair share. this was shipped as part of cp67 for the 360/67.

in the morph from cp67 to vm370 ... much of it was dropped. charlie's cp67 multiprocessor support also didn't make it into vm370.

i had done a lot of pathlength optimization and fastpath stuff for cp67 which was also dropped in the morph to vm370 ... i helped put a small amount of that back into vm370 release1 plc9 ... a couple past posts mentioning some of the cp67 pathlength stuff
https://www.garlic.com/~lynn/93.html#1 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14

i then got to work on porting a bunch of stuff that i had done for cp67 to vm370 ... some recent posts (includes old email from the early and mid 70s)
https://www.garlic.com/~lynn/2006v.html#36 Why these original FORTRAN quirks?
https://www.garlic.com/~lynn/2006w.html#7 Why these original FORTRAN quirks?
https://www.garlic.com/~lynn/2006w.html#8 Why these original FORTRAN quirks?
https://www.garlic.com/~lynn/2006w.html#10 long ago and far away, vm370 from early/mid 70s

and of course mentioned in the above referenced email ... a small amount of the virtual memory management stuff showed up in vm370 release 3 as DCSS.

there was eventually a decision to release some amount of the features as the vm370 resource manager. some collected posts on scheduling
https://www.garlic.com/~lynn/subtopic.html#fairshare
and other posts on page management
https://www.garlic.com/~lynn/subtopic.html#wsclock
and for something really different old communication (from 1982) about work i had done as undergraduate in the 60s (also in this thread):
https://www.garlic.com/~lynn/2006w.html#46 The Future of CPUs: What's after Multi-Core?

in any case, some resource manager issues/features

by continually doing real-time dynamic monitoring and adjusting of operations, I was able to operate at much higher resource utilization and still provide a decent level of service. prior to the resource manager ship, somebody from corporate stated that the current state of the art for resource managers was a large number of static tuning parameters, and that the resource manager couldn't be considered really advanced unless it had some number of static tuning parameters (an installation's system tuning expert would look at daily, weekly and monthly activity ... and would select some set of static tuning values that seemed suited to that installation).

it did absolutely no good explaining that real-time dynamic monitoring and adapting was much more advanced than static tuning parameters. so, in order to get final corporate release approval ... i had to implement some number of static tuning parameters. I fully documented the implementation and formulas, and the source code was readily available. Nobody seemed to realize that it was a joke ... somewhat from "operations research" ... it had to do with "degrees of freedom" ... aka the static tuning parameters had far fewer degrees of freedom than the dynamic adaptive features (so the dynamic adaptation could always compensate for whatever values the knobs were set to).
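a minimal sketch of the idea (python, with a toy formula and numbers of my own ... not the shipped resource manager code): scheduling order is recomputed from measured consumption against the fair-share target, so there are no static knobs to set.

def fair_share_order(users, interval=1.0):
    # users: dict of name -> cpu seconds consumed in the last interval
    target = interval / len(users)     # default policy: equal (fair) shares
    # ratio > 1 means over-consumption (deprioritize), < 1 means favor
    return sorted(users, key=lambda name: users[name] / target)

# light interactive users get scheduled ahead of the cpu-hog batch user
print(fair_share_order({"edit": 0.10, "batch": 0.70, "apl": 0.20}))
# -> ['edit', 'apl', 'batch'] ... re-measured and re-ordered every interval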

i had always thought that real-time dynamic adaptive control was preferable to static parameters ... but it took another couple decades for a lot of the rest of the operating systems to catch up. it is now fairly evident ... even showing up in all sorts of embedded processors for real-time control and optimization. for some slight boyd dynamic adaptive drift
https://www.garlic.com/~lynn/94.html#8 scheduling & dynamic adaptive
and collected posts mentioning boyd
https://www.garlic.com/~lynn/subboyd.html#boyd
and various URLs from around the web mentioning boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2

there was a transition in the mid-70s with respect to charging for software. the 23jun69 unbundling announcement had introduced charging for application software (somewhat because of gov. litigation). however, the rationale was that kernel software should still be free since it was required for operation of the hardware. with the advent of clone processors by the mid-70s, that opinion was starting to shift, and the resource manager got chosen to be the guinea pig for kernel software charging. as a result, i got to spend some amount of time with business and legal people on kernel software charging.
https://www.garlic.com/~lynn/submain.html#unbundle

some amount of the code in the resource manager had originally been built for multiprocessor operation ... it was then added to a base vm370 system that didn't have support for multiprocessor hardware operation. the next release of vm370 did introduce support for multiprocessor hardware operation ... some amount based on the VAMPS work ... misc. past posts mentioning VAMPS multiprocessor work:
https://www.garlic.com/~lynn/submain.html#bounce

the problem was that the multiprocessor support was going to be part of the (still) free base kernel (aka hardware support) ... while much of the multiprocessor kernel code structure was part of the "priced" resource manager (and pricing policy said that there couldn't be free software with a priced software prerequisite). so before the multiprocessor support was shipped, a lot of the resource manager code base was re-organized into the free kernel. lots of past posts mentioning multiprocessor support (and/or charlie's invention of the compare&swap instruction)
https://www.garlic.com/~lynn/subtopic.html#smp

....

previous posts in this thread:
https://www.garlic.com/~lynn/2006t.html#27 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006t.html#31 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006t.html#32 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006t.html#34 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006t.html#36 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006t.html#41 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006t.html#42 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006t.html#43 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006t.html#49 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006t.html#50 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006u.html#0 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006u.html#6 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006u.html#7 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006u.html#8 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006u.html#9 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006u.html#10 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006v.html#21 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006v.html#43 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006x.html#2 The Future of CPUs: What's After Multi-Core?

The Future of CPUs: What's After Multi-Core?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Wed, 20 Dec 2006 13:22:27 -0700
eugene@cse.ucsc.edu (Eugene Miya) writes:
Well most people had no idea their channel I/O rates were 4x IBM's. And that's not including striping, which most firms still don't do. Most serious Cray programs stayed in memory. They might output a checkpoint file every now and again to restart should a hiccup occur.

references to previous posts in this thread:
https://www.garlic.com/~lynn/2006x.html#10 The Future of CPUs: What's After Multi-Core?

... i.e. HSC ... LANL doing a standards flavor of the cray channel ... morphed into HiPPI ... 800mbit/sec.

recent past posts mentioning HSC or HiPPI
https://www.garlic.com/~lynn/2006b.html#14 Expanded Storage
https://www.garlic.com/~lynn/2006c.html#1 Multiple address spaces
https://www.garlic.com/~lynn/2006c.html#40 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006l.html#43 One or two CPUs - the pros & cons
https://www.garlic.com/~lynn/2006m.html#52 TCP/IP and connecting z to alternate platforms
https://www.garlic.com/~lynn/2006u.html#19 Why so little parallelism?
https://www.garlic.com/~lynn/2006v.html#10 What's a mainframe?
https://www.garlic.com/~lynn/2006x.html#3 Why so little parallelism?

at about the same time, LLNL was pushing for a fiber-optic version (fiber channel standard, FCS) of the non-blocking switch technology they had installed, which used serial copper (as opposed to serial fiber-optics) ... and SLAC was pushing SCI (another serial fiber-optic standard) ... which had a mapping for SCSI commands.

In that time-frame, POK was trying to get around to releasing escon ... 200mbit/sec fiber-optic, emulating a standard mainframe half-duplex parallel channel. This had been knocking around in POK since the 70s ... never quite making it out. While POK was trying to make a decision about releasing escon, one of the Austin engineers took the base escon technology and made a few changes ... upped the raw bandwidth to 220mbits/sec, enhanced it to full-duplex operation, and redid the optical drivers (well under 1/10th the cost of the ones being used for escon). This was released as "SLA" (serial link adapter) for rs/6000.

we were doing some stuff with LLNL (on FCS), LANL (on HiPPI) and SLAC (on SCI) about the time the engineer had finished the SLA work and wanted to start on a 800mbit version. It took almost six months to talk him into abandoning SLA and going to work on FCS (where he became the secretary/owner of the FCS standards document).

later he was one of the primary people putting together much of the details for MEDUSA (cluster-in-a-rack). misc. recent posts mentioning MEDUSA:
https://www.garlic.com/~lynn/2006w.html#13 IBM sues maker of Intel-based Mainframe clones
https://www.garlic.com/~lynn/2006w.html#14 IBM sues maker of Intel-based Mainframe clones
https://www.garlic.com/~lynn/2006w.html#20 cluster-in-a-rack
https://www.garlic.com/~lynn/2006w.html#40 Why so little parallelism?
https://www.garlic.com/~lynn/2006w.html#41 Why so little parallelism?
https://www.garlic.com/~lynn/2006x.html#3 Why so little parallelism?

if you go back to 370s ... there were some 1.5mbyte/sec channels (primarily driven at that rate by fixed head disks). 3380 disks and 3880 disk controllers increased that to 3mbyte/sec operation (and the newer generation of 3mbyte/sec channels on various processors seen in the 80s). the earlier generation of channels did a protocol handshake on every byte transferred. the new 3mbyte/sec "data streaming" channels relaxed that requirement ... which both increased the data rate ... but also doubled the maximum channel distance from 200ft to 400ft i.e. you could have a machine room configuration with controllers out at a 400ft radius rather than just a 200ft radius. The 200ft limitation had gotten so severe for some installations that there were starting to appear multi-floor configurations (i.e. instead of a 200ft radius circle limitation ... a 200ft radius sphere).
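a back-of-the-envelope sketch (python, with assumed delay numbers ... not actual channel specs) of why the interlocked channels were distance-limited: every byte costs a cable round trip, so throughput falls off with length, while a "data streaming" channel's rate is set by clocking rather than by the cable:

PROP_NS_PER_FT = 1.5                 # assumed cable propagation delay, ns/ft

def interlocked_mbyte_per_sec(cable_ft, ctrl_overhead_ns=400):
    # one byte transferred per handshake round trip
    round_trip_ns = 2 * cable_ft * PROP_NS_PER_FT + ctrl_overhead_ns
    return 1e9 / round_trip_ns / 1e6

for ft in (50, 200, 400):
    print(ft, "ft:", round(interlocked_mbyte_per_sec(ft), 2), "mbyte/sec")
# 50 ft: ~1.82, 200 ft: ~1.0, 400 ft: ~0.62 ... with these assumed numbers
# the per-byte handshake caps rates near the era's 1.5mbyte/sec channels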

The Future of CPUs: What's After Multi-Core?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers,bit.listserv.vmesa-l
Date: Wed, 20 Dec 2006 15:54:45 -0700
Anne & Lynn Wheeler <lynn@garlic.com> writes:
if you go back to 370s ... there were some 1.5mbyte/sec channels (primarily driven at that rate by fixed head disks). 3380 disks and 3880 disk controllers increased that to 3mbyte/sec operation (and the newer generation of 3mbyte/sec channels on various processors seen in the 80s). the earlier generation of channels did a protocol handshake on every byte transferred. the new 3mbyte/sec "data streaming" channels relaxed that requirement ... which both increased the data rate ... but also doubled the maximum channel distance from 200ft to 400ft i.e. you could have a machine room configuration with controllers out at a 400ft radius rather than just a 200ft radius. The 200ft limitation had gotten so severe for some installations that there were starting to appear multi-floor configurations (i.e. instead of a 200ft radius circle limitation ... a 200ft radius sphere).

re:
https://www.garlic.com/~lynn/2006x.html#11 The Future of CPUs: What's After Multi-Core?

for other topic drift, i had done some work in the disk engineering (bldg14) and disk product test (bldg15) labs ... misc. past posts mentioning work in bldg14 &/or bldg15
https://www.garlic.com/~lynn/subtopic.html#disk

and misc. past posts this year mentioning 3380 disks and/or 3880 disk controllers
https://www.garlic.com/~lynn/2006.html#4 Average Seek times are pretty confusing
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
https://www.garlic.com/~lynn/2006c.html#6 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#9 Mainframe Jobs Going Away
https://www.garlic.com/~lynn/2006d.html#0 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006e.html#45 using 3390 mod-9s
https://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s
https://www.garlic.com/~lynn/2006f.html#5 3380-3390 Conversion - DISAPPOINTMENT
https://www.garlic.com/~lynn/2006g.html#0 IBM 3380 and 3880 maintenance docs needed
https://www.garlic.com/~lynn/2006i.html#12 Mainframe near history (IBM 3380 and 3880 docs)
https://www.garlic.com/~lynn/2006i.html#41 virtual memory
https://www.garlic.com/~lynn/2006j.html#2 virtual memory
https://www.garlic.com/~lynn/2006j.html#3 virtual memory
https://www.garlic.com/~lynn/2006j.html#11 The Pankian Metaphor
https://www.garlic.com/~lynn/2006j.html#14 virtual memory
https://www.garlic.com/~lynn/2006l.html#6 Google Architecture
https://www.garlic.com/~lynn/2006l.html#13 virtual memory
https://www.garlic.com/~lynn/2006l.html#18 virtual memory
https://www.garlic.com/~lynn/2006m.html#5 Track capacity?
https://www.garlic.com/~lynn/2006m.html#8 Track capacity?
https://www.garlic.com/~lynn/2006m.html#13 Track capacity?
https://www.garlic.com/~lynn/2006n.html#8 Not Your Dad's Mainframe: Little Iron
https://www.garlic.com/~lynn/2006n.html#33 CRAM, DataCell, and 3850
https://www.garlic.com/~lynn/2006n.html#35 The very first text editor
https://www.garlic.com/~lynn/2006o.html#44 When Does Folklore Begin???
https://www.garlic.com/~lynn/2006q.html#50 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006r.html#35 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#37 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#39 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#40 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006s.html#30 Why magnetic drums was/are worse than disks ?
https://www.garlic.com/~lynn/2006s.html#32 Why magnetic drums was/are worse than disks ?
https://www.garlic.com/~lynn/2006s.html#33 Why magnetic drums was/are worse than disks ?
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006t.html#18 Why magnetic drums was/are worse than disks ?
https://www.garlic.com/~lynn/2006t.html#41 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006v.html#0 Why these original FORTRAN quirks?
https://www.garlic.com/~lynn/2006v.html#16 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006v.html#17 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006v.html#20 Ranking of non-IBM mainframe builders?

The Future of CPUs: What's After Multi-Core?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Wed, 20 Dec 2006 20:58:10 -0700
Brian Inglis <Brian.Inglis@SystematicSW.Invalid> writes:
Always thought IBM channels were much slower than the then current technology could easily achieve: 10x faster would still put them below current PC buses.

re:
https://www.garlic.com/~lynn/2006x.html#11 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006x.html#12 The Future of CPUs: What's After Multi-Core?

hsc/hippi was 800mbits/sec parallel, half-duplex

escon was 200mbits ... and although it used dual fiber-optics, it operated as if it were a half-duplex parallel channel. that put escon at 1/4 the hsc/hippi rate.

the mainframe genre refers to it as a 17mbyte/sec channel. the rs/6000 SLA somewhat originated from the same technology ... but upgraded the rate to 220mbits/sec (instead of 200mbits).
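a minimal sketch in C of the unit conversions behind these comparisons, using only the rates quoted above (the variable names are made up for illustration, and the comment about where the 17mbyte/sec figure comes from is my assumption ... encoding and protocol overhead ... rather than anything spelled out in the post):

#include <stdio.h>

int main(void)
{
    double hippi_mbits = 800.0;   /* hsc/hippi, half-duplex          */
    double escon_mbits = 200.0;   /* escon link rate                 */
    double sla_mbits   = 220.0;   /* rs/6000 SLA                     */

    /* escon at 1/4 the hsc/hippi rate */
    printf("escon/hippi: %.2f\n", escon_mbits / hippi_mbits);

    /* 200mbits is 25mbyte/sec if every bit were data; the quoted
       17mbyte/sec channel figure is lower (assumption: encoding and
       protocol overhead account for the difference) */
    printf("escon raw: %.1f mbyte/sec\n", escon_mbits / 8.0);

    /* SLA upgraded the same technology by 10percent */
    printf("sla vs escon: +%.0f%%\n",
           100.0 * (sla_mbits - escon_mbits) / escon_mbits);
    return 0;
}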

as mentioned, part of the upgrade to 3mbyte/sec "data streaming" was also increasing aggregate channel distances from 200ft to 400ft. along with the 3mbyte/sec "data streaming" channels there were also 3380 disks with 3mbyte/sec transfer.

some of the FCS standards activity had representation from the mainframe channel organization and there was lots of contention about including half-duplex mainframe channel operation as part of the FCS standard. it currently goes by the term "FICON". old posts mentioning FICON
https://www.garlic.com/~lynn/2000c.html#56 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2001.html#18 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001k.html#22 ESCON Channel Limits
https://www.garlic.com/~lynn/2002e.html#32 What goes into a 3090?
https://www.garlic.com/~lynn/2002n.html#50 EXCP
https://www.garlic.com/~lynn/2003h.html#0 Escon vs Ficon Cost
https://www.garlic.com/~lynn/2003o.html#54 An entirely new proprietary hardware strategy
https://www.garlic.com/~lynn/2004d.html#68 bits, bytes, half-duplex, dual-simplex, etc
https://www.garlic.com/~lynn/2004h.html#29 BLKSIZE question
https://www.garlic.com/~lynn/2004n.html#45 Shipwrecks
https://www.garlic.com/~lynn/2004p.html#29 FW: Is FICON good enough, or is it the only choice we get?
https://www.garlic.com/~lynn/2005e.html#13 Device and channel
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005l.html#26 ESCON to FICON conversion
https://www.garlic.com/~lynn/2005m.html#55 54 Processors?
https://www.garlic.com/~lynn/2005v.html#0 DMV systems?

old posts comparing cp67 360/67 thruput with vm370 3081 thruput and pointing out that relative disk system thruput had declined by a factor of more than an order of magnitude over a period of 10-15 yrs ... also mentioning that the disk division had taken exception and assigned the performance modeling organization to refute my statement. a couple weeks later they came back and commented that i had somewhat understated the problem:
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
https://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?
https://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
https://www.garlic.com/~lynn/98.html#46 The god old days(???)
https://www.garlic.com/~lynn/99.html#4 IBM S/360
https://www.garlic.com/~lynn/2001d.html#66 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#68 Q: Merced a flop or not?
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
https://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#9 What are some impressive page rates?
https://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2003i.html#33 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
https://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005k.html#53 Performance and Capacity Planning
https://www.garlic.com/~lynn/2006m.html#32 Old Hashing Routine

IBM ATM machines

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM ATM machines
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 21 Dec 2006 10:01:27 -0700
jsavard@ecn.ab.ca wrote:
That reminds me. The first ATMs in Canada I saw at my bank were made by IBM - and they used a formed-character printer to print the information about the transaction on card stock media.

It was apparent to me that what they were using was exactly the right size to be a 96-column card.


ref (including past posts mentioning los gatos lab atm machine development):
https://www.garlic.com/~lynn/2006x.html#9 Plurals and language confusion

ditto the san jose ibm credit union

offices were in the basement of bldg.12 ... they had to move out when bldg.12 underwent seismic retrofit.

and for the heck of it and more topic drift, other posts in the pin attack threads
https://www.garlic.com/~lynn/2006u.html#42 New attacks on the financial PIN processing
https://www.garlic.com/~lynn/2006u.html#43 New attacks on the financial PIN processing
https://www.garlic.com/~lynn/2006u.html#47 New attacks on the financial PIN processing
https://www.garlic.com/~lynn/2006u.html#48 New attacks on the financial PIN processing
https://www.garlic.com/~lynn/2006v.html#1 New attacks on the financial PIN processing
https://www.garlic.com/~lynn/2006v.html#2 New attacks on the financial PIN processing
https://www.garlic.com/~lynn/2006v.html#33 New attacks on the financial PIN processing
https://www.garlic.com/~lynn/2006v.html#39 On sci.crypt: New attacks on the financial PIN processing
https://www.garlic.com/~lynn/2006v.html#42 On sci.crypt: New attacks on the financial PIN processing
https://www.garlic.com/~lynn/2006v.html#46 Patent buster for a method that increases password security
https://www.garlic.com/~lynn/2006v.html#49 Patent buster for a method that increases password security

the original post in the above thread
https://www.garlic.com/~lynn/2006u.html#40 New attacks on the financial PIN processing

was oriented towards insiders being able to take advantage of their position to compromise PIN processing.

this is different than the skimming/harvesting attacks
https://www.garlic.com/~lynn/subintegrity.html#harvest

that can be done with compromised terminals (which may involve insiders or outsiders) ... i.e. capture of static authentication data for replay attacks. traditionally this has involved magstripe cards (with or w/o PINs) ... but has also been used against chipcards that rely on static authentication data
https://www.garlic.com/~lynn/2006v.html#45 On sci.crypt: New attacks on the financial PIN processing

in some cases ... some of the chipcard implementations using static authentication data have made it even more attractive for attackers doing skimming exploits.

The Future of CPUs: What's After Multi-Core?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Thu, 21 Dec 2006 11:19:16 -0700
eugene@cse.ucsc.edu (Eugene Miya) writes:
Why are you always a decade off?

re:
https://www.garlic.com/~lynn/2006x.html#11 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006x.html#12 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006x.html#13 The Future of CPUs: What's After Multi-Core?

you wrote "channel i/o rates were 4x IBM's" ... the only mainframe channel that was 1/4th 800mbits was escon (200mbits), which didn't start shipping until the time-frame of hsc/hippi ... the 90s.

i was only answering based on what you had written. if you feel that you got the time-frame reference wrong, it isn't my fault.

maybe i misunderstood what you were trying to say and you really were referring to I/O operations per second ... independent of bytes transferred? or maybe you weren't referring to rates of a channel ... but actually were referring to the overall number of system I/O operations per second (which happened to be channel I/O operations ... again independent of the bytes transferred). Or maybe you meant something different than what you typed?

I apologize if I misunderstood you and you weren't actually referring to a cray channel byte transfer rate being four times IBM's.

if you really meant to reference the 1970s "Cray-1 (when this started)" ... then you would be referring to the mainframe bus&tag 1.5mbyte/sec channel ... around 12mbit/sec ... or do you mean to state that the 1970s Cray-1 channel was 4*12mbit/sec ... or 48mbit/sec?

part of the bus&tag limitation was doing a protocol handshake on every byte transferred (which also imposed the aggregate 200ft limit on each channel). The only devices that actually ran at that (1.5mbyte/sec) rate were fixed-head disks ... and even then many systems required significantly reduced channel distances (in order to operate at that channel transfer rate).

if the decade that you want to reference was with respect to an ibm channel that was 1/4th the 800mbits/sec ... then you are talking about escon and the 90s. if you want to reference the 70s when ibm channels were 1.5mbytes/sec ... then are you implying that the 1970s cray-1 channel was around 6mbytes/sec data transfer rate?

as to raid ... reference to patent awarded 1978 to somebody at the san jose plant site ... past posts referencing the patent:
https://www.garlic.com/~lynn/2002e.html#4 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2004d.html#29 cheaper low quality drives
https://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2006p.html#47 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006u.html#18 Why so little parallelism?

wiki reference:
https://en.wikipedia.org/wiki/Redundant_array_of_independent_disks

misc. past posts mentioning work with bldg. 14 disk engineering and bldg. 15 disk product test labs
https://www.garlic.com/~lynn/subtopic.html#disk

somewhere in the archives, I even have email with the person granted the patent.

i got involved starting in the 80s looking at various kinds of parallel transfers as a way of mitigating disk performance bottlenecks.

lots of past posts mentioning disk striping and/or raid:
https://www.garlic.com/~lynn/93.html#28 Log Structured filesystems -- think twice
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/94.html#4 Schedulers
https://www.garlic.com/~lynn/94.html#16 Dual-ported disks?
https://www.garlic.com/~lynn/96.html#33 Mainframes & Unix
https://www.garlic.com/~lynn/99.html#197 Computing As She Really Is. Was: Re: Life-Advancing Work of Timothy Berners-Lee
https://www.garlic.com/~lynn/99.html#200 Life-Advancing Work of Timothy Berners-Lee
https://www.garlic.com/~lynn/2000c.html#24 Hard disks, one year ago today
https://www.garlic.com/~lynn/2000c.html#61 TF-1
https://www.garlic.com/~lynn/2000e.html#20 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2001.html#13 Review of Steve McConnell's AFTER THE GOLD RUSH
https://www.garlic.com/~lynn/2001.html#33 Where do the filesystem and RAID system belong?
https://www.garlic.com/~lynn/2001.html#34 Competitors to SABRE?
https://www.garlic.com/~lynn/2001.html#35 Where do the filesystem and RAID system belong?
https://www.garlic.com/~lynn/2001.html#36 Where do the filesystem and RAID system belong?
https://www.garlic.com/~lynn/2001.html#37 Competitors to SABRE?
https://www.garlic.com/~lynn/2001.html#38 Competitors to SABRE?
https://www.garlic.com/~lynn/2001.html#41 Where do the filesystem and RAID system belong?
https://www.garlic.com/~lynn/2001.html#42 Where do the filesystem and RAID system belong?
https://www.garlic.com/~lynn/2001.html#61 Where do the filesystem and RAID system belong?
https://www.garlic.com/~lynn/2001b.html#14 IBM's announcement on RVAs
https://www.garlic.com/~lynn/2001c.html#78 Unix hard links
https://www.garlic.com/~lynn/2001c.html#80 Unix hard links
https://www.garlic.com/~lynn/2001c.html#81 Unix hard links
https://www.garlic.com/~lynn/2001d.html#2 "Bootstrap"
https://www.garlic.com/~lynn/2001d.html#17 "Bootstrap"
https://www.garlic.com/~lynn/2001e.html#31 High Level Language Systems was Re: computer books/authors (Re: FA:
https://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#71 commodity storage servers
https://www.garlic.com/~lynn/2001g.html#15 Extended memory error recovery
https://www.garlic.com/~lynn/2001j.html#23 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#5 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001m.html#56 Contiguous file system
https://www.garlic.com/~lynn/2001n.html#70 CM-5 Thinking Machines, Supercomputers
https://www.garlic.com/~lynn/2001n.html#79 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2002d.html#14 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002e.html#4 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002l.html#47 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002n.html#2 SRP authentication for web app
https://www.garlic.com/~lynn/2002n.html#9 Asynch I/O
https://www.garlic.com/~lynn/2002n.html#18 Help! Good protocol for national ID card?
https://www.garlic.com/~lynn/2003.html#48 InfiniBand Group Sharply, Evenly Divided
https://www.garlic.com/~lynn/2003b.html#68 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#69 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#70 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003c.html#66 FBA suggestion was Re: "average" DASD Blocksize
https://www.garlic.com/~lynn/2003d.html#64 IBM was: VAX again: unix
https://www.garlic.com/~lynn/2003h.html#14 IBM system 370
https://www.garlic.com/~lynn/2003i.html#48 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2003i.html#54 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2003j.html#64 Transactions for Industrial Strength Programming
https://www.garlic.com/~lynn/2003m.html#42 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003n.html#36 Cray to commercialize Red Storm
https://www.garlic.com/~lynn/2004.html#3 The BASIC Variations
https://www.garlic.com/~lynn/2004b.html#41 SSL certificates
https://www.garlic.com/~lynn/2004c.html#38 ATA drives and vibration problems in multi-drive racks
https://www.garlic.com/~lynn/2004d.html#29 cheaper low quality drives
https://www.garlic.com/~lynn/2004d.html#30 cheaper low quality drives
https://www.garlic.com/~lynn/2004e.html#7 OT Global warming
https://www.garlic.com/~lynn/2004e.html#25 Relational Model and Search Engines?
https://www.garlic.com/~lynn/2004f.html#37 Why doesn't Infiniband supports RDMA multicast
https://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#22 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004h.html#43 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2004k.html#28 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004p.html#38 funny article
https://www.garlic.com/~lynn/2004p.html#59 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2005b.html#1 Foreign key in Oracle Sql
https://www.garlic.com/~lynn/2005d.html#33 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005e.html#5 He Who Thought He Knew Something About DASD
https://www.garlic.com/~lynn/2005e.html#6 He Who Thought He Knew Something About DASD
https://www.garlic.com/~lynn/2005e.html#10 He Who Thought He Knew Something About DASD
https://www.garlic.com/~lynn/2005e.html#11 He Who Thought He Knew Something About DASD
https://www.garlic.com/~lynn/2005h.html#44 First assembly language encounters--how to get started?
https://www.garlic.com/~lynn/2005j.html#13 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005k.html#4 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2005m.html#33 Massive i/o
https://www.garlic.com/~lynn/2005m.html#35 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005m.html#41 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005m.html#46 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005n.html#8 big endian vs. little endian, why?
https://www.garlic.com/~lynn/2005n.html#42 Moz 1.8 performance dramatically improved
https://www.garlic.com/~lynn/2005n.html#51 IPSEC and user vs machine authentication
https://www.garlic.com/~lynn/2005r.html#18 SATA woes
https://www.garlic.com/~lynn/2005t.html#17 winscape?
https://www.garlic.com/~lynn/2006b.html#39 another blast from the past
https://www.garlic.com/~lynn/2006d.html#1 Hercules 3.04 announcement
https://www.garlic.com/~lynn/2006d.html#3 Hercules 3.04 announcement
https://www.garlic.com/~lynn/2006d.html#24 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006l.html#6 Google Architecture
https://www.garlic.com/~lynn/2006l.html#14 virtual memory
https://www.garlic.com/~lynn/2006o.html#9 Pa Tpk spends $30 million for "Duet" system; but benefits are unknown
https://www.garlic.com/~lynn/2006p.html#47 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006r.html#37 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006u.html#18 Why so little parallelism?
https://www.garlic.com/~lynn/2006u.html#56 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006v.html#37 Is this true? (Were gotos really *that* bad?)

The Future of CPUs: What's After Multi-Core?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Thu, 21 Dec 2006 11:49:07 -0700
eugene@cse.ucsc.edu (Eugene Miya) writes:
It surprises me that percentage of memory used by a running program was never a big metric. Things on the application side only started to get interesting when it exceeded 51% and approached 99%. Some people might have run into that problem on disk, but less so main memory. VM clouds that issue (Lynn would argue makes usable more memory [but he would not point out sacrificing performance]). You guys all use such small numbers in the computing sense.

re:
https://www.garlic.com/~lynn/2006x.html#15 The Future of CPUs: What's After Multi-Core?

again do you have a meaning other than what you had typed?

i've repeatedly used examples comparing a 360/67 operating in 360/65 mode (w/o hardware relocation) with a 360/67 operating with dat enabled.

the basic 360/65 (and 360/67) double word memory cycle time was 750ns (for a uniprocessor). DAT (dynamic address translation) added 150ns to that ... or 900ns (the multiprocessor calculation got a lot more complex).

that is just the base hardware overhead ... doesn't include associative array miss ... i.e. the 150ns is only for when the virtual->real translation is in the eight entry associative array. it also doesn't include the software overhead of managing the tables ... or software overhead of doing page i/o operations (moving pages into/out of memory).
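a minimal sketch in C working those numbers (just the base hardware overhead from the text ... the associative array misses and software overhead mentioned above are extra):

#include <stdio.h>

int main(void)
{
    double base_ns = 750.0;   /* 360/65 & 360/67 doubleword cycle    */
    double dat_ns  = 150.0;   /* added when DAT is enabled and the
                                 translation is already in the
                                 eight-entry associative array       */

    printf("cycle with DAT: %.0fns\n", base_ns + dat_ns);      /* 900ns */
    printf("hw overhead:    %.0f%%\n", 100.0 * dat_ns / base_ns); /* 20 */
    return 0;
}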

as to the software overhead ... part of my point with regard to cp67 virtual memory support is that i rewrote most of it as an undergraduate in the 60s and reduced the overhead by better than an order of magnitude ... but still didn't make it disappear.

maybe you are confusing some of my statements about having rewritten the code and reduced the software overhead by better than an order of magnitude with my not being able to understand that the overhead is non-zero.

later the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

developed extensive metrics as to all aspects of system performance ... some of it mentioned in collected posts on extensive benchmarking, work load profiling and precursor to what has become capacity planning, virtual memory operation
https://www.garlic.com/~lynn/submain.html#bench

another aspect of it was vs/repack ... which was an application/product for doing extensive program monitoring and implemented semi-automated program reorganization for operation in a virtual memory environment ... recent post about vs/repack technology (dating back to a published article from 71)
https://www.garlic.com/~lynn/2006x.html#1

in combination there was quite a bit of investigation of things like "weak" and "strong" working sets (something sometimes seen today with processor caches as locality of reference) as well as percentage of total real storage required.

misc. past posts mentioning the 750/900ns difference, associative array, etc
https://www.garlic.com/~lynn/93.html#26 MTS & LLMPS?
https://www.garlic.com/~lynn/94.html#46 Rethinking Virtual Memory
https://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
https://www.garlic.com/~lynn/98.html#46 The god old days(???)
https://www.garlic.com/~lynn/99.html#4 IBM S/360
https://www.garlic.com/~lynn/2000.html#88 ASP (was: mainframe operating systems)
https://www.garlic.com/~lynn/2000b.html#52 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000f.html#59 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000g.html#9 360/370 instruction cycle time
https://www.garlic.com/~lynn/2000g.html#21 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001.html#71 what is interrupt mask register?
https://www.garlic.com/~lynn/2001c.html#7 LINUS for S/390
https://www.garlic.com/~lynn/2001c.html#84 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001h.html#9 VM: checking some myths.
https://www.garlic.com/~lynn/2002b.html#6 Microcode?
https://www.garlic.com/~lynn/2002c.html#44 cp/67 (coss-post warning)
https://www.garlic.com/~lynn/2002d.html#51 Hardest Mistake in Comp Arch to Fix
https://www.garlic.com/~lynn/2002f.html#13 Hardware glitches, designed in and otherwise
https://www.garlic.com/~lynn/2002g.html#61 GE 625/635 Reference + Smart Hardware
https://www.garlic.com/~lynn/2003.html#13 FlexEs and IVSK instruction
https://www.garlic.com/~lynn/2003g.html#10a Speed of APL on 360s, was Any DEC 340 Display System Doco ?
https://www.garlic.com/~lynn/2003g.html#20 price ov IBM virtual address box??
https://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, and other rambling folklore
https://www.garlic.com/~lynn/2003g.html#23 price ov IBM virtual address box??
https://www.garlic.com/~lynn/2003g.html#33 price ov IBM virtual address box??
https://www.garlic.com/~lynn/2003k.html#48 Who said DAT?
https://www.garlic.com/~lynn/2003m.html#29 SR 15,15
https://www.garlic.com/~lynn/2003o.html#52 Virtual Machine Concept
https://www.garlic.com/~lynn/2004.html#16 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004.html#25 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004.html#53 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
https://www.garlic.com/~lynn/2004c.html#46 IBM 360 memory
https://www.garlic.com/~lynn/2005b.html#62 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2005d.html#66 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005g.html#17 DOS/360: Forty years
https://www.garlic.com/~lynn/2005h.html#18 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005j.html#43 A second look at memory access alignment
https://www.garlic.com/~lynn/2005k.html#5 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2005p.html#27 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005s.html#20 MVCIN instruction
https://www.garlic.com/~lynn/2005s.html#21 MVCIN instruction
https://www.garlic.com/~lynn/2006.html#15 S/360
https://www.garlic.com/~lynn/2006e.html#0 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006x.html#4 S0C1 with ILC 6
https://www.garlic.com/~lynn/2006x.html#5 S0C1 with ILC 6

The Future of CPUs: What's After Multi-Core?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Thu, 21 Dec 2006 12:22:30 -0700
eugene@cse.ucsc.edu (Eugene Miya) writes:
It surprises me that percentage of memory used by a running program was never a big metric. Things on the application side only started to get interesting when it exceeded 51% and approached 99%. Some people might have run into that problem on disk, but less so main memory. VM clouds that issue (Lynn would argue makes usable more memory [but he would not point out sacrificing performance]). You guys all use such small numbers in the computing sense.

re:
https://www.garlic.com/~lynn/2006x.html#15 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006x.html#16 The Future of CPUs: What's After Multi-Core?

again do you have a meaning other than what you typed?

when I was an undergraduate in the 60s, nearly all of the mainframes were real memory systems. i had responsibility for building and supporting the univ. production (real memory) system.

three people from the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

came out the last week in jan68 to deliver cp67 (virtual memory and virtual machine support).

in my spare time between classes and my primary job responsibilities, I got to play a little with cp67. nearly all of the applications to be run under cp67 came over from the real memory environment ... so as a result there was constant comparison of how the application ran in the real memory environment vis-a-vis the degradation under cp67 due to 1) hardware translation, 2) software virtual memory support and 3) software virtual machine support ... which were all heavily measured and identified as sources of degradation.

these days you have very few A/B comparisons (of the same exact application running with the same exact libraries and operating system) in real memory mode compared to virtual memory mode.

However, in the little spare time that I had to play with cp67 ... i did manage to come up with a lot of technology that I was then able to design, implement and deploy ... recent reference to global LRU replacement algorithm work
https://www.garlic.com/~lynn/2006w.html#46 The Future of CPUs: What's After Multi-Core?
recent reference to dynamic adaptive scheduling work
https://www.garlic.com/~lynn/2006x.html#10 The Future of CPUs: What's After Multi-Core?

a few posts referencing part of a conference presentation that i made later in '68 on some of my cp67 enhancements comparing real memory operating system performance ... running on a real memory processor and running in a virtual address space under cp67 (after having a few months to play with the cp67 source code)
https://www.garlic.com/~lynn/93.html#1 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14
https://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?
https://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
https://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2001f.html#26 Price of core memory
https://www.garlic.com/~lynn/2001i.html#33 Waterloo Interpreters (was Re: RAX (was RE: IBM OS Timeline?))
https://www.garlic.com/~lynn/2001k.html#37 Is anybody out there still writting BAL 370.
https://www.garlic.com/~lynn/2001l.html#42 is this correct ? OS/360 became MVS and MVS >> OS/390
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002f.html#38 Playing Cards was Re: looking for information on the IBM
https://www.garlic.com/~lynn/2002l.html#55 The problem with installable operating systems
https://www.garlic.com/~lynn/2002m.html#3 The problem with installable operating systems
https://www.garlic.com/~lynn/2002n.html#29 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#53 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2003d.html#72 cp/67 35th anniversary
https://www.garlic.com/~lynn/2004f.html#6 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2005n.html#40 You might be a mainframer if... :-) V3.8
https://www.garlic.com/~lynn/2005t.html#8 2nd level install - duplicate volsers
https://www.garlic.com/~lynn/2006.html#41 Is VIO mandatory?
https://www.garlic.com/~lynn/2006e.html#45 using 3390 mod-9s
https://www.garlic.com/~lynn/2006m.html#25 Mainframe Limericks
https://www.garlic.com/~lynn/2006w.html#22 Are hypervisors the new foundation for system software?

The Future of CPUs: What's After Multi-Core?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Thu, 21 Dec 2006 15:21:18 -0700
eugene@cse.ucsc.edu (Eugene Miya) writes:
Because mainframes are centralized fixed fortifications which at best belong to an ancient less distributed era. Distributed computing is hard. At best we have the crudest of fault tolerance built into systems. Parity started as a joke with Cray (for instance). Many programmers have no idea about parity. The communication guys who think about it usually think in terms of checksums not whole systems. From the security/insecurity perceptive the first thing to do if you seriously wanted penetration tools is develop a checksum spoofer. That would be the first of many tools for a workbench.

do you mean what you typed? misc. refs:
https://www.garlic.com/~lynn/2006j.html#45 Arpa address
https://www.garlic.com/~lynn/2006k.html#8 Arpa address
https://www.garlic.com/~lynn/2006k.html#9 Arpa address
https://www.garlic.com/~lynn/2006k.html#34 PDP-1
https://www.garlic.com/~lynn/2006n.html#2 The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2006x.html#15 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006x.html#16 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006x.html#17 The Future of CPUs: What's After Multi-Core?

there are lots of tales about ruggedized 360s & 370s all over the place.

for minor topic drift recent post referencing distributed lock manager supporting distributed DBMS operation while meeting ACID properties
https://www.garlic.com/~lynn/2006x.html#3 Why so little parallelism?

then there is the reference to the location that Boyd managed, which supposedly was a $2.5B (billion) windfall for IBM ... misc past posts mentioning the windfall
https://www.garlic.com/~lynn/2005m.html#22 Old Computers and Moisture don't mix - fairly OT
https://www.garlic.com/~lynn/2005m.html#23 Old Computers and Moisture don't mix - fairly OT
https://www.garlic.com/~lynn/2005m.html#24 Old Computers and Moisture don't mix - fairly OT
https://www.garlic.com/~lynn/2005t.html#1 Dangerous Hardware
https://www.garlic.com/~lynn/2006q.html#37 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006q.html#38 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006u.html#49 Where can you get a Minor in Mainframe?
https://www.garlic.com/~lynn/2006u.html#50 Where can you get a Minor in Mainframe?

misc. past posts mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd
and misc. URLs from around the web mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2

you saw a lot of distributed computing starting to happen with 370 138s & 148s ... but it really started to come into full swing with 4331/4341 customer orders in the later 70s ... much more than the vax market (from late 70s thru mid-80s). customers were ordering them in blocks of a hundred. one large chip design shop had a single order for nearly 1000. these 4341 "mainframes" frequently didn't go into traditional glasshouse locations (centralized fixed fortifications) ... they went into a similar market segment as many of the vax machines ... except there were a lot more of them. there were stories about locations co-opt'ing a large percentage of the conference rooms for locating the machines.

and a reference to an air force order in the late 70s for 210 4341s
https://www.garlic.com/~lynn/2001m.html#15 departmental servers

for comparison, a few past posts giving vax shipments broken out by year, model, US, world-wide, etc.
https://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction
https://www.garlic.com/~lynn/2005f.html#37 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2006k.html#31 PDP-1

there are business critical dataprocessing operations which formulate criteria for their "glass house" operation ... these fixed fortifications frequently are populated with mainframes, in part, because of various business critical dataprocessing requirements ... however, many such "glass houses" may also be the location for several hundred (or thousand) corporate "distributed servers" ... again because of the corporations' business critical dataprocessing requirements ... and pretty much totally orthogonal to any networking and/or distributed computing architecture issues.

in earlier decades there were driving factors for distributed computing being co-located at remote sites ... in large part because of constrained communication facilities. however, with advances in communication and networking technology ... there starts to be more latitude and trade-offs with regard to physical location.

some recent posts including old email about physically distributed operation as well as distributed computing
https://www.garlic.com/~lynn/2006p.html#34 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006p.html#40 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006s.html#41 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006t.html#47 To RISC or not to RISC
https://www.garlic.com/~lynn/2006v.html#11 What's a mainframe?
https://www.garlic.com/~lynn/2006v.html#17 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006v.html#19 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006v.html#23 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006v.html#25 Ranking of non-IBM mainframe builders?

The Future of CPUs: What's After Multi-Core?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Fri, 22 Dec 2006 08:42:53 -0700
jmfbahciv writes:
Lynn is missing the point a little bit but is getting the idea.

a large part of how machines are deployed and managed has to do with their uses ... there is also a big issue of perception.

i've joked before about the perception regarding some companies providing batch or timesharing ... as if it were an either/or scenario.

at one point there was a perception that multics provided a lot more timesharing than ibm mainframes ... presumably because there were so many ibm mainframes being used for batch business critical operations.

i've mentioned before that there was a significantly larger number of customers using ibm mainframes for batch business critical operations than were using them for strictly timesharing services ... ala cms, cp67, and vm370. and the internal corporate use of ibm mainframes for cp67, cms, and vm370 was a smaller number than the total customer use of cp67, cms, and vm370.

at one point
https://www.garlic.com/~lynn/2006w.html#8 Why these original FORTRAN quirks?

I was building highly customized vm370 systems and shipping them directly to internal corporate timesharing installations. the internal corporate timesharing installations that i was directly supporting were only a small percentage of the total number of internal corporate timesharing installations. the total number of internal corporate timesharing installations was smaller than the total number of customer timesharing installations. However, at its peak, the total number of internal corporate timesharing installations that I was building systems for was about the same as the total number of Multics systems that ever existed in the lifetime of Multics operation (it didn't seem to be fair to compare the total number of vm370 systems to the total number of Multics systems ... or even the total number of internal corporate vm370 systems to the total number of Multics systems ... I'm only talking about the number of systems that I directly built compared to the total number of Multics systems).

While the number of IBM mainframes used for such timesharing operations was larger than any other vendor's systems built for timesharing use, the perception seems to have been that such use was almost non-existent ... possibly because the number of such timesharing systems was so dwarfed by the number of batch commercial systems (i.e. the number of dedicated timesharing systems wasn't actually that small ... it was that the number of batch commercial systems was so much larger ... which appeared to warp people's perception).

misc. past posts mentioning comparing the number of multics timesharing systems
https://www.garlic.com/~lynn/2000f.html#60 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2001h.html#34 D
https://www.garlic.com/~lynn/2003d.html#68 unix
https://www.garlic.com/~lynn/2003g.html#29 Lisp Machines
https://www.garlic.com/~lynn/2004c.html#47 IBM 360 memory
https://www.garlic.com/~lynn/2005d.html#39 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005q.html#39 How To Abandon Microsoft
https://www.garlic.com/~lynn/2006m.html#25 Mainframe Limericks

Also as pointed out before ... no commercial timesharing offerings ever appeared based on Multics ... however, there were several such commercial timesharing offerings based on vm370 (with some even going back to cp67 days starting in the late 60s).
https://www.garlic.com/~lynn/submain.html#timeshare

some number of these commercial timesharing services even supported commercial corporate applications that would involve extremely sensitive corporate information ... and even hosted different competing corporate clients (with extremely sensitive operations) on the same platform. as such they had to have fairly significant security measures to keep their (varied) corporate clients' operations private/confidential (from other clients and from outsiders). a small example of that was alluded to here
https://www.garlic.com/~lynn/2006v.html#22 vmshare

and as to others with various kinds of integrity and confidentiality requirements ... a small reference here
https://www.garlic.com/~lynn/2006w.html#0 Patent buster for a method that increases password security

and as referred to here ... some amount of the physical security & integrity offered by a computing platform ... may reflect the applications being used and/or the data being processed ... as opposed to being inherently a characteristic of the hardware.
https://www.garlic.com/~lynn/2006x.html#18 The Future of CPUs: What's After Multi-Core?

To some extent, how tools are used and deployed ... will reflect what the tools are being used for. If the tools are used for extremely business critical dataprocessing ... the deployment and operation can be different than when a similar platform is used for less critical purposes.

I've referenced before that circa 1970 ... the science center started on a joint project with endicott to add support for 370 virtual machines with 370 virtual memory ... to the cp67 kernel (running on a 360/67). this project included using a network link between endicott and cambridge for a form of distributed development ... referenced here
https://www.garlic.com/~lynn/2006u.html#7 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006w.html#3 IBM sues make of Intel-based Mainframe clones

... as well as an early driver for the development of cms multi-level source management referenced here
https://www.garlic.com/~lynn/2006w.html#42 vmshare
https://www.garlic.com/~lynn/2006w.html#48 vmshare

At the time, information that virtual memory support was going to be available on 370 machines was a closely guarded corporate secret ... and required significant security measures.

Another undertaking at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

was the porting of apl\360 to cms. while the existence of apl on cms wasn't a closely guarded corporate secret ... cambridge offered online cms\apl services to internal corporate users. typical apl\360 offerings at the time offered users 16kbyte (or possibly 32kbyte) workspaces. cms\apl opened that up so that several mbyte workspaces were then possible. This enabled using cms\apl for some real-life business modeling uses (some of the stuff typically done with spreadsheets today) ... and cambridge got some corporate hdqtrs users ... who proceeded to load the most sensitive corporate business data on the cambridge system (in support of their business modeling activities).

the 370 virtual memory activity and the corporate business modeling involved some of the most sensitive and critical corporate information of the period ... being hosted on the cambridge cp67 system. At the same time, there was significant use of the cambridge cp67 system by people (students and others) at various universities in the cambridge area (mit, harvard, bu, ne, etc). This created some amount of security tension ... having to provide the very highest level of corporate security protection on the same platform that was being used by univ. students and other non-employees.

misc. past mention of cambridge cp67 system and security exposure
https://www.garlic.com/~lynn/2001i.html#44 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2002h.html#60 Java, C++ (was Re: Is HTML dead?)
https://www.garlic.com/~lynn/2004b.html#31 determining memory size
https://www.garlic.com/~lynn/2004c.html#7 IBM operating systems
https://www.garlic.com/~lynn/2004h.html#27 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2005f.html#63 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005p.html#20 address space
https://www.garlic.com/~lynn/2005p.html#27 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
https://www.garlic.com/~lynn/2006b.html#23 Seeking Info on XDS Sigma 7 APL
https://www.garlic.com/~lynn/2006f.html#5 3380-3390 Conversion - DISAPPOINTMENT
https://www.garlic.com/~lynn/2006h.html#14 Security
https://www.garlic.com/~lynn/2006n.html#2 The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2006o.html#19 Source maintenance was Re: SEQUENCE NUMBERS

"The Elements of Programming Style"

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "The Elements of Programming Style"
Newsgroups: alt.folklore.computers
Date: Fri, 22 Dec 2006 09:15:07 -0700
"John Coleman" <jcoleman@franciscan.edu> writes:
2) The following passage struck me as ironic in view of the fact that Kernighan subsequently had his name indelibly attached to C, a language which was to become notorious for subscript out of range bugs (especially in the guise of buffer overflows):

"Some compilers (WATFIV, PL/I with SUBSCRIPTRANGE enabled, for instance) allow a check during execution that subscripts do not exceed array dimensions... [but] many programmers do not use such compilers because 'They're not efficient' (Presumably this means that it is vital to get the wrong answer quickly.)" pg. 85. Here, I think, the irony is only apparent given the original target of C as a systems programming language on minicomputers (as opposed to application on main frames) where the added effeciency was worth the extra debugging time required to get it right. But nowadays I can't see why anyone wouldn't use runtime subscript checking.


I've contended that far more egregious than subscript out of range has been the implicit string lengths found extensively in lots of string libraries and application code. the use of implicit lengths ... including null-terminated strings ... aggravates the situation.

let's say it is implicitly part of the "C" programming style.

PLI and many system implementations have conventions where buffers carry actual lengths ... both max and current. for the simple operation of copying a string from one place to another ... there is an explicit length for the origin string and an explicit max. length for the target ... and string copy/move operations explicitly honor the lengths. i.e. lots of buffer overflows don't happen one program subscript at a time ... they involve a move/copy that is apparently totally ignorant of the target location size as well as the origin location size. With explicit lengths, a large number of buffer overflows can never occur whether or not SUBSCRIPTRANGE is enabled.
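a minimal sketch in C of the explicit-length convention being described (the struct and function names are made up for illustration ... not any particular library's API):

#include <string.h>

/* every buffer carries both a maximum and a current length */
struct buf {
    size_t max;    /* capacity of data[]      */
    size_t len;    /* bytes currently in use  */
    char  *data;
};

/* copy src into dst, truncating to dst->max ... the move can never
   overflow the target no matter what length the source claims */
void buf_copy(struct buf *dst, const struct buf *src)
{
    size_t n = src->len < dst->max ? src->len : dst->max;
    memcpy(dst->data, src->data, n);
    dst->len = n;
}

compare with strcpy, which has to scan for a terminating null and has no idea how big the target is ... exactly the implicit-length style being criticized.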

misc. past post on buffer overflows
https://www.garlic.com/~lynn/subintegrity.html#overflow

"The Elements of Programming Style"

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "The Elements of Programming Style"
Newsgroups: alt.folklore.computers
Date: Fri, 22 Dec 2006 09:34:28 -0700
"John Coleman" <jcoleman@franciscan.edu> writes:
1) One of the most interesting passages I read was:

"Most of the IF-ELSE examples we have shown so far have a characteristic in common. Each approximates, as closely as the programmer could manage, a minimum depth decision tree for the problem at hand ... But a program is a one-dimensional construct, which obscures any two-dimensional connectness it may have." (pg 46). This passage suggested to me that maybe one of the sources of spaghetti code was the use of flow charts in algorithm design, since flow-charts are 2-dimensional. Is there any truth to this? To what extent did programmers actually *use* flow charts? Since I am a mathematician, I thought of a mathematical analogy - any graph (system of nodes and edges) can be embedded in 3 - dimensional space without a hitch, but if you try to draw them in 2 dimensions, some graphs (the non-planar ones) force you to introduce edge-crossings which detract from the readability of the diagram. Maybe gotos are analogous to edge-crossings - the cost of representing a structure in a space with fewer dimensions.


for a different case ... in the early 70s I wrote a PLI program that analyzed (360/370) assembler listings, attempted to construct program flow and register use-before-set, and also generated pseudo higher level language ... assembler was all branch (aka goto) ... but i tried to interpret the program flow logic as if/then/else and do/while/until constructs.

for some straight-forward application flow ... it turned into relatively straight-forward readable code. however, there were some highly optimized kernel routines where the if/then/else/do/while/until could be less clear than the original assembler. it typically involved a larger amount of state testing with all sorts of conditional processing. I eventually implemented a nesting threshold limit ... and dropped back to GOTOs when the threshold was exceeded.
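a toy sketch in C of the threshold idea (not the original PLI implementation ... the limit value and names are made up, and it assumes an acyclic flow region rather than also recognizing do/while/until):

#include <stdio.h>

#define NEST_LIMIT 4    /* hypothetical value, the post doesn't give one */

struct node {
    int label;                /* statement label from the listing     */
    struct node *taken;       /* branch-taken successor               */
    struct node *fall;        /* fall-through successor               */
};

/* reconstruct branches as if/else while nesting is shallow ... and
   fall back to emitting an explicit goto once the limit is exceeded */
static void emit(struct node *n, int depth)
{
    if (n == NULL)
        return;
    if (depth > NEST_LIMIT) {                 /* too deeply nested ... */
        printf("%*sgoto L%d;\n", depth * 2, "", n->label);
        return;                               /* ... revert to a goto  */
    }
    printf("%*s/* code for block L%d */\n", depth * 2, "", n->label);
    if (n->taken || n->fall) {
        printf("%*sif (cond_L%d) {\n", depth * 2, "", n->label);
        emit(n->taken, depth + 1);
        printf("%*s} else {\n", depth * 2, "");
        emit(n->fall, depth + 1);
        printf("%*s}\n", depth * 2, "");
    }
}

int main(void)
{
    struct node c = { 30, NULL, NULL };
    struct node b = { 20, NULL, NULL };
    struct node a = { 10, &b, &c };           /* a branches to b or c  */
    emit(&a, 0);
    return 0;
}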

recent post about some of the Fort Knox people (an effort to replace many of the internal corporate microprocessors with 801) who were looking at using 801 for low/mid range 370s ... and the possibility of doing 370 machine language program analysis and a form of JIT.
https://www.garlic.com/~lynn/2006u.html#29 To RISC or not to RISC
https://www.garlic.com/~lynn/2006u.html#31 To RISC or not to RISC
https://www.garlic.com/~lynn/2006u.html#32 To RISC or not to RISC

and further fort knox drift
https://www.garlic.com/~lynn/2006u.html#37 To RISC or not to RISC
https://www.garlic.com/~lynn/2006u.html#38 To RISC or not to RISC

and past thread about some of the characteristics/issues with doing such machine instruction analysis and if/else program flow construction
https://www.garlic.com/~lynn/2006e.html#32 transputers again was: The demise of Commodore
https://www.garlic.com/~lynn/2006p.html#1 Greatest Software Ever Written?
https://www.garlic.com/~lynn/2006p.html#4 Greatest Software Ever Written?

and then there is other topic drift on subject of 3-value logic
https://www.garlic.com/~lynn/2003g.html#40 How to cope with missing values - NULLS?
https://www.garlic.com/~lynn/2004f.html#2 Quote of the Week
https://www.garlic.com/~lynn/2004l.html#75 NULL
https://www.garlic.com/~lynn/2005.html#15 Amusing acronym
https://www.garlic.com/~lynn/2005b.html#17 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005i.html#35 The Worth of Verisign's Brand
https://www.garlic.com/~lynn/2005m.html#19 Implementation of boolean types
https://www.garlic.com/~lynn/2005r.html#15 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005t.html#20 So what's null then if it's not nothing?
https://www.garlic.com/~lynn/2005t.html#23 So what's null then if it's not nothing?
https://www.garlic.com/~lynn/2005t.html#33 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005u.html#12 3vl 2vl and NULL
https://www.garlic.com/~lynn/2006e.html#34 CJ Date on Missing Information
https://www.garlic.com/~lynn/2006q.html#22 3 value logic. Why is SQL so special?
https://www.garlic.com/~lynn/2006q.html#23 3 value logic. Why is SQL so special?
https://www.garlic.com/~lynn/2006q.html#29 3 value logic. Why is SQL so special?
https://www.garlic.com/~lynn/2006s.html#27 Why these original FORTRAN quirks?

'Innovation' and other crimes

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 'Innovation' and other crimes
Newsgroups: alt.folklore.computers
Date: Fri, 22 Dec 2006 10:30:38 -0700
"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
One of the hot items in current debates at Vancouver city hall is whether to pass a bylaw requiring all dumpsters to be locked. Can't have people picking through them, y'know - they might actually reuse things. And as we all know, waste is the surest sign of a prosperous economy.

dumpster diving has been a major source of account fraud, identity theft, industrial espionage, etc ... it has also given rise to a big increase in the sales of shredders for the home market.

... all kinds of drift ...

lots of past posts about threats, exploits, vulnerabilities, and fraud
https://www.garlic.com/~lynn/subintegrity.html#fraud

Multiple mappings

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple mappings
Newsgroups: comp.arch
Date: Fri, 22 Dec 2006 10:52:15 -0700
"Eric P." <eric_pattison@sympaticoREMOVE.ca> writes:
With 2.6, the kernel is mapped into all processes (as most other OS do) but it may have retained some of that original mapping functionality from previous versions for driver compatibility.

modulo the virtual machine monitors ... dating back to cp40 in the mid-60s.

MVS inherited global mapping from its real-memory ancestors ... which had a paradigm that was also extensively pointer-passing (not only the kernel ... but all the subsystem services that existed outside of the kernel ... everything essentially existing in a single *real* address space).

There was a difficult period for MVS before the move from 24-bit virtual address spaces to 31-bit virtual address spaces ... where the system mapping could take 12-13mbytes of an application's virtual 16mbytes (in some cases leaving only 3mbytes for the application).

for lots of topic drift ...

While the MVS kernel got global mapping ... there was a problem with the non-kernel, semi-privileged, subsystem services (things that might look like "daemons" in other environments) ... which got their own individual "application" address space. The problem was that the paradigm was still extensively pointer-passing ... even between general applications and subsystem services.

The initial cut was a "COMMON" segment ... something that appeared in all address spaces ... where an application could stuff some parameters and then the subsystem service could access them using the passed pointer.

The next cut at addressing this was dual-address space mode ... actually having two active virtual address spaces simultaneously ... a pointer was passed to a subsystem service in a different address space ... and that subsystem service could use the pointer to reach directly into the application address space. The downside was that it still required a kernel transition to get to/from the subsystem service.

Dual-address space was then generalized to multiple address spaces and the "program call" function was added. program call used a hardware-defined table of subsystem functions and the operations that happen on a call (like how the virtual address space pointers change) ... it allowed transitioning directly from the application address space into the subsystem address space (and back) w/o requiring a software transition thru the kernel. the pointer-passing paradigm continues ... and there is now all sorts of cross virtual address space activity going on.
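a toy model in C of the pointer-passing idea (nothing like the actual hardware definition ... address spaces modeled as byte arrays, and the table/function names made up): a "program call" table routes a request directly to a subsystem handler, which uses the caller's address-space id to dereference the passed pointer in the caller's space ... no trip through the kernel in between.

#include <stdio.h>
#include <string.h>

#define SPACES 4
#define SPACE_SIZE 4096

static char space[SPACES][SPACE_SIZE];   /* one array per "address space" */

typedef void (*pc_entry)(int caller_as, size_t parm_ptr);

/* subsystem service: reaches directly into the caller's space via
   the passed pointer (here just the caller's id plus an offset) */
static void log_service(int caller_as, size_t parm_ptr)
{
    printf("service saw: %s\n", &space[caller_as][parm_ptr]);
}

/* the "program call" table of subsystem functions */
static pc_entry pc_table[] = { log_service };

int main(void)
{
    int my_as = 1;                              /* application's space */
    strcpy(&space[my_as][100], "parms from AS 1");
    pc_table[0](my_as, 100);                    /* "program call" #0   */
    return 0;
}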

IBM sues maker of Intel-based Mainframe clones

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: IBM sues maker of Intel-based Mainframe clones
Newsgroups: bit.listserv.ibm-main
Date: 22 Dec 2006 10:12:18 -0800
Nigel Hadfield wrote:
And a wonderful company called Enterprise Computer Services made a very good living for a number of years upgrading, downgrading, and crossgrading 3090s, by doing just that with IBM's engines. Made much easier once a good late friend and colleague had essentially hacked the VM system that was the 3090 console.

i.e. the "3090" service processor was a modified version of vm370 release 6 running on a pair of 4361 processors, with most of the screens/menus written in IOS3270. Part of this was the result of the experience with the 3081 service processor, where all of the software was totally written from scratch (trying to get to some amount of off-the-shelf stuff).

minor folklore ... my dumprx was selected for use as diagnostic (software system) support for the 3090 service processor ... in the early 80s, i had (re-)implemented a large superset of IPCS function totally in rexx
https://www.garlic.com/~lynn/submain.html#dumprx

... topic drift, a couple recent postings mentioning vm370 release 6
https://www.garlic.com/~lynn/2006b.html#2 Mount a tape
https://www.garlic.com/~lynn/2006t.html#24 CMSBACK
https://www.garlic.com/~lynn/2006w.html#25 To RISC or not to RISC

... for other drift ... this is crudely HTML'ed version of GCARD IOS3270
https://www.garlic.com/~lynn/gcard.html

Executing both branches in advance ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Executing both branches in advance ?
Newsgroups: comp.arch
Date: Fri, 22 Dec 2006 16:16:13 -0700
"Stephen Sprunk" <stephen@sprunk.org> writes:
This reminds me of the stat IBM gave that adding SMT cost them 25% more transistors but only gained 24% in overall performance. Is that really a win?

similar question comes up all the time ... if the processor chip is something like 10percent of overall system cost ... and if the additional 25percent transistors represents on the order of a 2.5percent increase in overall system cost ... and the 24percent gain in overall performance translates into a 24percent gain in overall system thruput ... then it's a win (modulo lost opportunity ... i.e. the SMT effort didn't interfere with doing something else that represented an even larger overall system thruput gain ... aka larger ROI ... return-on-investment).

if processor thruput is the primary system bottleneck and the processor chip is on the order of 1/10th the overall system cost ... then you may get a 10:1 ROI ... improvement in overall system thruput ... for investment in processor improvement. the question becomes whether there is anything else, for the same investment, that results in a bigger improvement in overall system thruput.
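
a rough sketch of the arithmetic in c (the 10percent chip share of system cost is the assumption from above, purely for illustration):

  /* SMT trade-off: 25percent more transistors on a chip that is
     10percent of system cost, vs 24percent more thruput */
  #include <stdio.h>

  int main(void)
  {
      double chip_fraction = 0.10;   /* chip share of system cost */
      double extra_chip    = 0.25;   /* 25percent more transistors */
      double thruput_gain  = 0.24;   /* 24percent more thruput */

      double cost_increase = chip_fraction * extra_chip;
      printf("system cost up %.1f%%, thruput up %.0f%% ... %.1f:1\n",
             100 * cost_increase, 100 * thruput_gain,
             thruput_gain / cost_increase);
      return 0;
  }

which works out to a 2.5percent system cost increase for a 24percent thruput increase ... roughly 9.6:1.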

there is an analogous discussion about multiprocessor configurations regarding the incremental system thruput improvement for each additional processor added ... here is a thread on mainframe LSPR ratios ... avg. configuration MIP thruput numbers for different mainframe models as the number of processors in the configuration is increased
https://www.garlic.com/~lynn/2006l.html#41 One or two CPUs - the pros & cons
and
http://www.redbooks.ibm.com/abstracts/sg246645.html

from above ...

a 2094-701 (one processor) is rated at 608 SI MIPs and a 2094-702 (two processors) is rated at 1193 SI MIPs ... an increase of 585 SI MIPs for twice the processors ... a 100percent increase in processor hardware represents only a 96percent increase in processor thruput.

a 2094-731 (31 processors) is rated at 11462 SI MIPs and a 2094-732 (32 processors) is rated at 11687 SI MIPs, or an increase of 225 SI MIPs. at a 32 processor configuration, adding the equivalent of a 608 SI MIP processor represents only a 225/608 or 37percent effective system thruput increase (compared to a single processor). that is way less than the equivalent of adding 25percent processor circuits and getting a 24percent effective processor thruput increase.

24/25 is an effective net benefit of 96percent ... which is a similar net benefit to that from going from a 2094-701 (one processor) to a 2094-702 (two processor) configuration (a 100percent increase in the number of processor circuits for a 96percent improvement in thruput).

and for whatever reason, various installations continue to incrementally add processors ... even as the incremental benefit of each added processor continues to drop. part of the issue is that the overall system thruput improvement is measured against total system cost ... and the incremental thruput can still be significantly better than the incremental processor cost (as a percentage of the total).
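
the incremental numbers above, worked in c (using just the quoted SI MIP figures):

  /* incremental thruput per added processor from the LSPR figures */
  #include <stdio.h>

  int main(void)
  {
      double m701 = 608, m702 = 1193;      /* 1 and 2 processors */
      double m731 = 11462, m732 = 11687;   /* 31 and 32 processors */

      printf("2nd processor adds %.0f SI MIPs (%.0f%% of a uniprocessor)\n",
             m702 - m701, 100 * (m702 - m701) / m701);
      printf("32nd processor adds %.0f SI MIPs (%.0f%% of a uniprocessor)\n",
             m732 - m731, 100 * (m732 - m731) / m701);
      return 0;
  }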

lots of past multiprocessor postings
https://www.garlic.com/~lynn/subtopic.html#smp

Multiple mappings

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple mappings
Newsgroups: comp.arch
Date: Fri, 22 Dec 2006 16:57:12 -0700
"mike" <mike@mike.net> writes:
It is interesting to observe how all this complexity disappears when the kernel and all user processes exist in a single humongous virtual address space as it does in CPF from the System/38...OS/400...i5/OS. In that system all files, all programs, and every other type of object is mapped into virtual memory and a read is just a page fault. There is no program relocation and no mapping at all.

Similar technology is used by an experimental system under development by Microsoft R&D which is almost entirely written in C# and so called "managed code" to provide higher levels of security than is possible with traditional designs.


modulo actually doing straight-forward memory mapping and then demand page faulting every page in (one 4k transfer at a time).

this was one of the problems with tss/360 ... on the 360/67 ... vis-a-vis cp67 with cms on the same hardware. even tho cms had many of the same problems (program relocation and no page mapped filesystem) ... it would attempt to do program loading in up to 64kbyte disk transfers (instead of 4kbytes at a time on a demand page fault basis).

tss/360 also had other issues, like bloat ... excessively long pathlengths as well as fixed kernel real storage requirements that possibly represented 80percent of total real storage (not only excessive demand page faulting but an excessive fixed kernel size contributing to page thrashing)

one of the things that i tried to do when i implemented page mapped filesystem for cms in the early 70s ... was to preserve the benefits of large block disk transfers for things like program loading ... and avoid regressing to single 4k at a time demand page faulting.
https://www.garlic.com/~lynn/submain.html#mmap

part of the trick is to have page mapping but still preserve paradigm supporting large block transfers (and minimize degenerating too much to 4k at a time demand page faults).
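
a rough modern unix analogue of the paradigm in c ... mmap plus a readahead hint (not the cms page mapped filesystem implementation, just an illustration of preserving large block transfers under page mapping):

  /* map a file, then hint the pager to bring it in with large
     sequential reads rather than one 4k demand fault at a time */
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <sys/stat.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
      if (argc < 2) {
          fprintf(stderr, "usage: %s file\n", argv[0]);
          return 1;
      }
      int fd = open(argv[1], O_RDONLY);
      if (fd < 0) { perror("open"); return 1; }

      struct stat st;
      fstat(fd, &st);

      char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
      if (p == MAP_FAILED) { perror("mmap"); return 1; }

      /* the large-block transfer hint ... without it, touching the
         pages below degenerates to 4k-at-a-time demand faults */
      madvise(p, st.st_size, MADV_WILLNEED);

      long sum = 0;
      for (off_t i = 0; i < st.st_size; i++)   /* touch every page */
          sum += p[i];
      printf("checksum %ld\n", sum);

      munmap(p, st.st_size);
      close(fd);
      return 0;
  }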

for some trivia ... s/38 was supposedly built by some of the refugees retreating to rochester after the future system effort failed (future system included the concept of one-level store ... some of it coming from the tss/360 effort) ... misc. future system postings
https://www.garlic.com/~lynn/submain.html#futuresys

of course even after doing page mapped filesystem for cms ... there was still a large part of the infrastructure that was oriented around program relocation paradigm ... various postings on the trials and tribulations attempting to deal with those problems
https://www.garlic.com/~lynn/submain.html#adcon

The Future of CPUs: What's After Multi-Core?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Sat, 23 Dec 2006 08:00:26 -0700
jmfbahciv writes:
Yes, you are getting it ;-). How do you change the users' perception? If our scientists have been trained that compute-bound runs are the equivalent to immovable gear, you cannot suddenly tell them to move their job mid-run to Arkansas when a hurricane is crawling up the Florida coast.

i've told before about the scenario of the air-bearing simulation done as part of floating disk head design (which came out in 3380 disks)

sjr had a 370/195 running mvt for batch jobs ... job queue backlog could be several weeks.

i've also mentioned somebody at the palo alto science center getting turn around every 2-3 months. palo alto science center had a 370/145 for vm timesharing. they basically configured their job for background running on vm (soaking up spare cycles mostly offshift and weekends). they found that they would get slightly more computation in a three month period out of their 370/145 background than from the turn-around on the sjr 370/195 (the 370/195 peaked around 10mips for appropriately tuned computational jobs ... compared to around 300kips for the 370/145).

anyway ... i was playing part time in bldg14 (disk engineering) and bldg15 (disk product test).
https://www.garlic.com/~lynn/subtopic.html#disk

they both had dedicated 370s for running various kinds of stand alone regression testing. much of this was engineering and prototype hardware. they had done some experimenting with trying to run the MVS operating system on these machines (to be able to use the operating system support for testing more than one device concurrently) ... but found the MTBF for MVS was about 15 minutes (hang or crash) just testing a single device (i.e. the operational and failure characteristics of engineering and prototype devices were extremely hostile to normal operating system operation).

so i thought this might be an interesting problem and undertook to rewrite the operating system i/o subsystem to be never-fail ... regardless of what the i/o devices did, the i/o subsystem would not precipitate a system crash or hang. the result was that they could now use the processors to do regression testing on multiple different devices concurrently (rather than having carefully scheduled dedicated processor time for testing each device).

the interesting scenario was that they now had all these processors ... and even testing multiple concurrent devices ... only a couple percent of each processor was being used. that left a whole lot of spare processor cycles that could be used for other stuff.

so things got a little more interesting when bldg. 15 (product test lab) got an early engineering 3033. the 370/195 would hit 10mips for carefully tuned applications but normally ran around 5mips for most codes (mostly branch instructions stalling the pipeline). the 3033 ran about 4.5mips. the air-bearing simulation work was maybe getting a couple hrs a month on the 370/195 ... but all of a sudden we could move it off the 370/195 in bldg. 28 over to the 3033 in bldg. 15 and provide it with almost unlimited 3033 time. now, not only were we enabling a lot of concurrent device testing, effectively on demand (which previously required scheduled dedicated time), but we were also able to provide nearly unlimited computational time for the air-bearing simulation work.

for topic drift, bldg. 14 was where the person who got the raid patent in the 70s was located ... misc. recent posts this year mentioning raid:
https://www.garlic.com/~lynn/2006b.html#39 another blast from the past
https://www.garlic.com/~lynn/2006c.html#38 X.509 and ssh
https://www.garlic.com/~lynn/2006d.html#1 Hercules 3.04 announcement
https://www.garlic.com/~lynn/2006d.html#3 Hercules 3.04 announcement
https://www.garlic.com/~lynn/2006d.html#24 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006d.html#26 Caller ID "spoofing"
https://www.garlic.com/~lynn/2006h.html#34 The Pankian Metaphor
https://www.garlic.com/~lynn/2006l.html#14 virtual memory
https://www.garlic.com/~lynn/2006o.html#9 Pa Tpk spends $30 million for "Duet" system; but benefits are unknown
https://www.garlic.com/~lynn/2006p.html#9 New airline security measures in Europe
https://www.garlic.com/~lynn/2006p.html#47 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006r.html#37 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006t.html#5 Are there more stupid people in IT than there used to be?
https://www.garlic.com/~lynn/2006u.html#18 Why so little parallelism?
https://www.garlic.com/~lynn/2006u.html#56 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006v.html#37 Is this true? (Were gotos really *that* bad?)
https://www.garlic.com/~lynn/2006x.html#15 The Future of CPUs: What's After Multi-Core?

...

and although shugart left before i got there, i believe shugart's offices would have also been in bldg. 14 (recent mention of shugart's passing) ... misc. past posts mentioning shugart
https://www.garlic.com/~lynn/2000.html#9 Computer of the century
https://www.garlic.com/~lynn/2002.html#17 index searching
https://www.garlic.com/~lynn/2002l.html#50 IBM 2311 disk drive actuator and head assembly
https://www.garlic.com/~lynn/2004.html#5 The BASIC Variations
https://www.garlic.com/~lynn/2004j.html#36 A quote from Crypto-Gram
https://www.garlic.com/~lynn/2004l.html#14 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004p.html#0 Relational vs network vs hierarchic databases
https://www.garlic.com/~lynn/2004q.html#64 Will multicore CPUs have identical cores?
https://www.garlic.com/~lynn/2005b.html#1 Foreign key in Oracle Sql
https://www.garlic.com/~lynn/2005c.html#9 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2005h.html#37 Software for IBM 360/30
https://www.garlic.com/~lynn/2006n.html#30 CRAM, DataCell, and 3850
https://www.garlic.com/~lynn/2006v.html#17 Ranking of non-IBM mainframe builders?

...

misc. past posts mentioning air-bearing simulation work
https://www.garlic.com/~lynn/2001n.html#39 195 was: Computer Typesetting Was: Movies with source code
https://www.garlic.com/~lynn/2002j.html#30 Weird
https://www.garlic.com/~lynn/2002n.html#63 Help me find pics of a UNIVAC please
https://www.garlic.com/~lynn/2002o.html#74 They Got Mail: Not-So-Fond Farewells
https://www.garlic.com/~lynn/2003b.html#51 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#52 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003j.html#69 Multics Concepts For the Contemporary Computing World
https://www.garlic.com/~lynn/2003m.html#20 360 Microde Floating Point Fix
https://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring, wandering story
https://www.garlic.com/~lynn/2004.html#21 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004b.html#15 harddisk in space
https://www.garlic.com/~lynn/2004o.html#15 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#25 CKD Disks?
https://www.garlic.com/~lynn/2005.html#8 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005f.html#4 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005f.html#5 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005o.html#44 Intel engineer discusses their dual-core design
https://www.garlic.com/~lynn/2006.html#29 IBM microwave application--early data communications
https://www.garlic.com/~lynn/2006c.html#6 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006d.html#0 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006d.html#13 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006d.html#14 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006l.html#6 Google Architecture
https://www.garlic.com/~lynn/2006l.html#18 virtual memory
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006t.html#41 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006u.html#18 Why so little parallelism?

The Future of CPUs: What's After Multi-Core?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Sat, 23 Dec 2006 08:23:45 -0700
jmfbahciv writes:
Yes, you are getting it ;-). How do you change the users' perception? If our scientists have been trained that compute-bound runs are the equivalent to immovable gear, you cannot suddenly tell them to move their job mid-run to Arkansas when a hurricane is crawling up the Florida coast.

re:
https://www.garlic.com/~lynn/2006x.html#19 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006x.html#27 The Future of CPUs: What's After Multi-Core?

for some, possibly as important as the various compute-bound stuff is a lot of the business critical dataprocessing (things like payroll, contracts, disbursements, etc)

there was a news article in the last couple of months about DFAS which, after having been temporarily relocated to philadelphia (from new orleans in the wake of the hurricane/flood), is now being moved to denver.

DFAS home page
http://www.dod.mil/dfas/

"The Elements of Programming Style"

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "The Elements of Programming Style"
Newsgroups: alt.folklore.computers
Date: Sat, 23 Dec 2006 09:05:37 -0700
CBFalconer <cbfalconer@yahoo.com> writes:
With suitable languages (e.g. Pascal) and close typing the compiler has enough information to avoid redundant checking. There have been tests of this, and the overhead was reduced to about a 5% or smaller increase in run time. I can't give references, since my library is in boxes somewhere.

re:
https://www.garlic.com/~lynn/2006x.html#20 "The Elements of Programming Style"
https://www.garlic.com/~lynn/2006x.html#21 "The Elements of Programming Style"

the mainframe tcp/ip product had been implemented in vs/pascal ... i had done the enhancements to support rfc 1044 ... which had significantly higher thruput than the base system (1mbyte/sec vis-a-vis 44kbytes/sec ... and something like 3-4 orders of magnitude bytes per processor time efficiency)
https://www.garlic.com/~lynn/subnetwork.html#1044

as far as i knew, the vs/pascal implementation never had any buffer overflow exploits ... the ingrained paradigm was that all string and buffer operations were always done with explicit lengths.

in much of the 90s, the major cause of exploits on the internet was buffer overflows associated with code implemented in the C language. common convention has been that static and/or dynamic buffers are specified by the programmer ... these buffers carry no associated length information ... the management of buffer length information is forced on the programmer ... and requires frequent and constant diligence (in every single operation involving such buffers). in pascal, the buffer length information is part of the characteristics of the buffer ... and most operations directly utilize the available buffer length information w/o additional programmer assistance ... it happens automatically as part of the semantics of the operations.

in C, there are widely used programming practices where the programmer either fails to provide for explicit length-sensitive operations or makes a mistake in providing for them (in pascal, the semantics were such that it didn't require the programmer to do or not do something ... it just happened automatically ... and as a result, there were fewer operations where it was possible for a programmer to make a mistake or forget to do something, aka it was an automatic part of the infrastructure and frequently of the semantics of the operation).
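
a small c contrast of the two paradigms (the length-carrying struct is my own illustration of the pascal-style semantics, not vs/pascal itself):

  #include <stdio.h>
  #include <string.h>

  struct lbuf {                  /* buffer carrying its own length */
      size_t max, len;
      char   data[16];
  };

  /* pascal-style semantics: the operation itself checks the length */
  int lbuf_copy(struct lbuf *b, const char *src)
  {
      size_t n = strlen(src);
      if (n > b->max)
          return -1;             /* refused ... overflow impossible */
      memcpy(b->data, src, n);
      b->len = n;
      return 0;
  }

  int main(void)
  {
      char bare[16];
      const char *input = "a string longer than sixteen bytes";

      /* strcpy(bare, input);  <- the classic overflow: nothing in
         the operation knows bare[] is only 16 bytes ... the
         programmer must remember the length every single time */
      snprintf(bare, sizeof bare, "%s", input);

      struct lbuf b = { .max = sizeof b.data };
      if (lbuf_copy(&b, input) < 0)
          printf("copy refused: %zu bytes won't fit in %zu\n",
                 strlen(input), b.max);
      return 0;
  }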

as automatic scripting became more prevalent, the exploit statistics shifted to about half buffer overflows and half automatic scripting by 1999. with the rise of things like phishing, the exploit statistics then shifted to 1/3rd social engineering, 1/3rd automatic scripting, and 1/3rd buffer overflows.

I tried to do some analysis of some of the exploit & vulnerability databases. However, the descriptive information has been free form and tended to have a great deal of variability. since then there have been a number of efforts to introduce a lot more standardization in exploit/vulnerability descriptions.

postings with some analysis of CVE entries
https://www.garlic.com/~lynn/2004e.html#43 security taxonomy and CVE
https://www.garlic.com/~lynn/2004q.html#74 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005k.html#3 Public disclosure of discovered vulnerabilities

postings about some published exploit reports coming up with nearly same percentages as my analysis (for buffer overrun/overflow)
https://www.garlic.com/~lynn/2005b.html#20 Buffer overruns
https://www.garlic.com/~lynn/2005c.html#28 Buffer overruns

and past posts mentioning buffer related exploits
https://www.garlic.com/~lynn/subintegrity.html#overflow

"The Elements of Programming Style"

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "The Elements of Programming Style"
Newsgroups: alt.folklore.computers
Date: Sat, 23 Dec 2006 09:21:59 -0700
Anne & Lynn Wheeler <lynn@garlic.com> writes:
and then there is other topic drift on subject of 3-value logic

re:
https://www.garlic.com/~lynn/2006x.html#21 "The Elements of Programming Style"

and 3-or-4 value logic

here is previous post in a buffer overrun thread
https://www.garlic.com/~lynn/2005b.html#17 Buffer overruns

making some reference to analyzing 360/370 assembler listings and attempting to produce higher-level program flow constructs.

as mentioned in the above post, one of the issues was that 360/370 instructions provided a two-bit condition code (four states) and there were numerous code segments (especially in highly optimized kernel code) which made use of 3 and/or 4 "value" logic ... a single condition setting followed by 3 or 4 resulting code paths.

the if/then construct provided for much simpler binary true/false type of logic operations.

in addition, some of the complex branch implementations were actually much simpler to understand than attempting to represent them with if/then binary logic nested 20-30 (or sometimes more) levels deep.
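
a small c illustration of the pattern (the condition-code encoding here is just an approximation for illustration):

  /* a two-bit condition code dispatches directly to up to four
     paths; if/then forces the same logic into nested binary tests */
  #include <stdio.h>

  enum cc { CC0, CC1, CC2, CC3 };  /* 360/370 two-bit condition code */

  /* e.g. after a compare: 0 = equal, 1 = low, 2 = high */
  static enum cc compare(int a, int b)
  {
      return a == b ? CC0 : (a < b ? CC1 : CC2);
  }

  int main(void)
  {
      switch (compare(3, 7)) {     /* one setting, up to four paths
                                      (BC branches on a cc mask) */
      case CC0: puts("equal"); break;
      case CC1: puts("low");   break;
      case CC2: puts("high");  break;
      case CC3: puts("other"); break;
      }
      return 0;
  }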

past posts mentioning buffer overruns
https://www.garlic.com/~lynn/subintegrity.html#overflow

The Future of CPUs: What's After Multi-Core?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Sat, 23 Dec 2006 09:46:30 -0700
Anne & Lynn Wheeler <lynn@garlic.com> writes:
so things got a little more interesting when bldg. 15 (product test lab) got an early engineering 3033. the 370/195 would hit 10mips for carefully tuned applications but normally ran around 5mips for most codes (mostly branch instructions stalling the pipeline). the 3033 ran about 4.5mips. the air-bearing simulation work was maybe getting a couple hrs a month on the 370/195 ... but all of a sudden we could move it off the 370/195 in bldg. 28 over to the 3033 in bldg. 15 and provide it with almost unlimited 3033 time. now, not only were we enabling a lot of concurrent device testing, effectively on demand (which previously required scheduled dedicated time), but we were also able to provide nearly unlimited computational time for the air-bearing simulation work.

re:
https://www.garlic.com/~lynn/2006x.html#27 The Future of CPUs: What's After Multi-Core?

the product test lab also got an early engineering model of the 4341. it turned out that at the time, i had better access to a 4341 for running various things than the endicott performance metrics group (responsible for doing performance profiling of the 4341). as a result i got asked to make various runs on the bldg. 15 4341 in support of various activities.

for instance the previous post in this thread
https://www.garlic.com/~lynn/2006x.html#18 The Future of CPUs: What's After Multi-Core?

mentioned customers snarfing up 4331s and 4341s in orders of multiple hundreds at a time. the above post also references an older post/email
https://www.garlic.com/~lynn/2001m.html#15 departmental servers

where the air force data systems (a multics installation) ordered 210 of them in the late 70s.

a few old posts mentioning doing benchmarks for endicott on bldg. 15's 4341
https://www.garlic.com/~lynn/2000d.html#0 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001l.html#32 mainframe question
https://www.garlic.com/~lynn/2002b.html#0 Microcode?
https://www.garlic.com/~lynn/2002i.html#7 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#19 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002k.html#4 misc. old benchmarks (4331 & 11/750)

one of the benchmarks done for endicott was rain&rain4 (from a national lab) ... results from the late 70s:

Following are times for floating point fortran job running same fortran


                    158               3031              4341

Rain              45.64 secs       37.03 secs         36.21 secs
Rain4             43.90 secs       36.61 secs         36.13 secs

also times approx;
                   145                168-3              91
                   145 secs.          9.1 secs          6.77 secs

rain/rain4 was from Lawrence Radiation lab ... and ran on cdc6600 in
35.77 secs.

... snip ...

effectively customers were ordering hundreds (or thousands) of machines at a time (each approx. the equivalent of a 6600).

Toyota set to lift crown from GM

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Toyota set to lift crown from GM
Newsgroups: alt.folklore.computers
Date: Sat, 23 Dec 2006 10:05:24 -0700
Toyota set to lift crown from GM
http://www.iht.com/articles/2006/12/22/business/toyota.php

from above:

TOKYO: Toyota Motor said Friday that it planned to sell 9.34 million vehicles next year, a figure that analysts said would be big enough to put it ahead of the troubled General Motors as the world's largest auto company.

... snip ...

sort of an update on past threads this year in a.f.c. mentioning us automobile industry
https://www.garlic.com/~lynn/2006.html#23 auto industry
https://www.garlic.com/~lynn/2006.html#43 Sprint backs out of IBM outsourcing deal
https://www.garlic.com/~lynn/2006.html#44 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006g.html#20 The Pankian Metaphor
https://www.garlic.com/~lynn/2006m.html#49 The Pankian Metaphor (redux)

NSFNET (long post warning)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: NSFNET (long post warning)
Newsgroups: alt.folklore.computers
Date: Sat, 23 Dec 2006 10:42:50 -0700
while the arpanet 1jan83 switch-over to internetworking protocol was somewhat the technology basis for the internet, the operational basis for the modern internet (large scale internetworking and internetworking backbone) would be NSFNET.

references here
https://www.garlic.com/~lynn/internet.htm#nsfnet
https://www.garlic.com/~lynn/rfcietf.htm#history

the pre-1jan83 infrastructure failed to scale up very well ... especially compared to the internal corporate network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

with issues like needing to perform synchronized, system-wide network node maint
https://www.garlic.com/~lynn/2006k.html#10 Arpa address

and continuing "high-rate" of growth, expecting to reach 100 nodes by (sometime in) 1983 (from arpanet newsletter) at a point when the internal network was approaching 1000 nodes
https://www.garlic.com/~lynn/2006r.html#7 Was FORTRAN buggy?
recent post discussing some of the issues
https://www.garlic.com/~lynn/2006x.html#8 vmshare

some recent NSFNET related postings:
https://www.garlic.com/~lynn/2005d.html#13 Cerf and Kahn receive Turing award
https://www.garlic.com/~lynn/2006s.html#50 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006t.html#6 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006w.html#21 SNA/VTAM for NSFNET
https://www.garlic.com/~lynn/2006w.html#43 IBM sues maker of Intel-based Mainframe clones

and old NSFNET/HSDT related email (in sequence)
https://www.garlic.com/~lynn/2006t.html#email850506
https://www.garlic.com/~lynn/2006w.html#email850607
https://www.garlic.com/~lynn/2006t.html#email850930
https://www.garlic.com/~lynn/2006t.html#email860407
https://www.garlic.com/~lynn/2006s.html#email860417

and was leading up to a number of activities, including scheduling the meeting referenced here
https://www.garlic.com/~lynn/2005d.html#email860501

then the people that would be attending the meeting were called and told the meeting was canceled ... accompanied by some number of efforts to push SNA/VTAM solutions on NSF ... as referenced in this email
https://www.garlic.com/~lynn/2006w.html#email870109

there were some followup efforts by NSF to extract the HSDT activity under a number of different guises; old email reference:
https://www.garlic.com/~lynn/2006s.html#email870515

lots of past posts mentioning various HSDT (high speed data transport) project related activities over the years
https://www.garlic.com/~lynn/subnetwork.html#hsdt

part of HSDT included having done the RFC 1044 implementation for the mainframe tcp/ip product
https://www.garlic.com/~lynn/subnetwork.html#1044

later some of the HSDT project activity morphed into 3-tier architecture
https://www.garlic.com/~lynn/subnetwork.html#3tier

and along the way, we had opportunity to play in high speed protocol activities
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
and another position from the dumb terminal communication operation
https://www.garlic.com/~lynn/2006i.html#17 blast from the past on reliable communication
from this email
https://www.garlic.com/~lynn/2006i.html#email890901

other topic drift regarding the dumb terminal communication operation
https://www.garlic.com/~lynn/2006x.html#7 vmshare
https://www.garlic.com/~lynn/2006x.html#8 vmshare

my wife had done a stint in POK in charge of loosely-coupled architecture (mainframe-speak for cluster)
https://www.garlic.com/~lynn/submain.html#shareddata

we used some amount of that along with hsdt and 3-tier when we started the HA/CMP product project
https://www.garlic.com/~lynn/subtopic.html#hacmp

more topic drift, some recent posts about HA/CMP scale-up with MEDUSA:
https://www.garlic.com/~lynn/2006w.html#13 IBM sues maker of Intel-based Mainframe clones
https://www.garlic.com/~lynn/2006w.html#14 IBM sues maker of Intel-based Mainframe clones
https://www.garlic.com/~lynn/2006w.html#20 cluster-in-a-rack
https://www.garlic.com/~lynn/2006w.html#40 Why so little parallelism?
https://www.garlic.com/~lynn/2006w.html#41 Why so little parallelism?
https://www.garlic.com/~lynn/2006x.html#3 Why so little parallelism?
https://www.garlic.com/~lynn/2006x.html#11 The Future of CPUs: What's After Multi-Core?

for other drift ... two of the people that were at this HA/CMP scale-up meeting
https://www.garlic.com/~lynn/95.html#13
https://www.garlic.com/~lynn/96.html#15
later showed up responsible for something called a commerce server at a small client/server startup. they wanted to process payment transactions on their server and we got called in as consultants
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
for something that came to be called electronic commerce or e-commerce

misc. past posts mentioning NSFNET
https://www.garlic.com/~lynn/98.html#49 Edsger Dijkstra: the blackest week of his professional life
https://www.garlic.com/~lynn/98.html#59 Ok Computer
https://www.garlic.com/~lynn/99.html#33 why is there an "@" key?
https://www.garlic.com/~lynn/99.html#37a Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#37b Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#38c Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#40 [netz] History and vision for the future of Internet - Public Question
https://www.garlic.com/~lynn/99.html#138 Dispute about Internet's origins
https://www.garlic.com/~lynn/99.html#146 Dispute about Internet's origins
https://www.garlic.com/~lynn/2000.html#49 IBM RT PC (was Re: What does AT stand for ?)
https://www.garlic.com/~lynn/2000c.html#26 The first "internet" companies?
https://www.garlic.com/~lynn/2000c.html#59 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#78 Free RT monitors/keyboards
https://www.garlic.com/~lynn/2000d.html#16 The author Ronda Hauben fights for our freedom.
https://www.garlic.com/~lynn/2000d.html#19 Comrade Ronda vs. the Capitalist Netmongers
https://www.garlic.com/~lynn/2000d.html#43 Al Gore: Inventing the Internet...
https://www.garlic.com/~lynn/2000d.html#56 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#58 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#59 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#63 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#70 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#71 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#72 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#73 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#74 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#77 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#5 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#10 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#11 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#19 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#28 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#29 Vint Cerf and Robert Kahn and their political opinions
https://www.garlic.com/~lynn/2000e.html#31 Cerf et.al. didn't agree with Gore's claim of initiative.
https://www.garlic.com/~lynn/2000f.html#44 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#47 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#50 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#51 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2001.html#4 Sv: First video terminal?
https://www.garlic.com/~lynn/2001d.html#42 IBM was/is: Imitation...
https://www.garlic.com/~lynn/2001e.html#76 Stoopidest Hardware Repair Call?
https://www.garlic.com/~lynn/2001h.html#44 Wired News :The Grid: The Next-Gen Internet?
https://www.garlic.com/~lynn/2001i.html#6 YKYGOW...
https://www.garlic.com/~lynn/2002g.html#40 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#45 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#5 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002h.html#79 Al Gore and the Internet
https://www.garlic.com/~lynn/2002h.html#80 Al Gore and the Internet
https://www.garlic.com/~lynn/2002h.html#82 Al Gore and the Internet
https://www.garlic.com/~lynn/2002h.html#85 Al Gore and the Internet
https://www.garlic.com/~lynn/2002h.html#86 Al Gore and the Internet
https://www.garlic.com/~lynn/2002i.html#15 Al Gore and the Internet
https://www.garlic.com/~lynn/2002i.html#45 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002j.html#45 M$ SMP and old time IBM's LCMP
https://www.garlic.com/~lynn/2002k.html#12 old/long NSFNET ref
https://www.garlic.com/~lynn/2002k.html#18 Unbelievable
https://www.garlic.com/~lynn/2002k.html#56 Moore law
https://www.garlic.com/~lynn/2002l.html#13 notwork
https://www.garlic.com/~lynn/2002l.html#48 10 choices that were critical to the Net's success
https://www.garlic.com/~lynn/2002o.html#41 META: Newsgroup cliques?
https://www.garlic.com/~lynn/2003c.html#5 Network separation using host w/multiple network interfaces
https://www.garlic.com/~lynn/2003c.html#11 Networks separation using host w/multiple network interfaces
https://www.garlic.com/~lynn/2003c.html#46 difference between itanium and alpha
https://www.garlic.com/~lynn/2003d.html#9 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003d.html#13 COMTEN- IBM networking boxes
https://www.garlic.com/~lynn/2003d.html#59 unix
https://www.garlic.com/~lynn/2003g.html#36 netscape firebird contraversy
https://www.garlic.com/~lynn/2003h.html#7 Why did TCP become popular ?
https://www.garlic.com/~lynn/2003j.html#1 FAST - Shame On You Caltech!!!
https://www.garlic.com/~lynn/2003j.html#76 1950s AT&T/IBM lack of collaboration?
https://www.garlic.com/~lynn/2003l.html#58 Thoughts on Utility Computing?
https://www.garlic.com/~lynn/2003m.html#28 SR 15,15
https://www.garlic.com/~lynn/2004b.html#46 ARPAnet guest accounts, and longtime email addresses
https://www.garlic.com/~lynn/2004g.html#12 network history
https://www.garlic.com/~lynn/2004l.html#0 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004l.html#1 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004l.html#3 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004l.html#5 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004l.html#7 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004m.html#62 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004q.html#57 high speed network, cross-over from sci.crypt
https://www.garlic.com/~lynn/2004q.html#58 CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE)
https://www.garlic.com/~lynn/2005d.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#10 Cerf and Kahn receive Turing award
https://www.garlic.com/~lynn/2005d.html#11 Cerf and Kahn receive Turing award
https://www.garlic.com/~lynn/2005e.html#46 Using the Cache to Change the Width of Memory
https://www.garlic.com/~lynn/2005j.html#30 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005j.html#58 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005l.html#16 Newsgroups (Was Another OS/390 to z/OS 1.4 migration
https://www.garlic.com/~lynn/2005n.html#28 Data communications over telegraph circuits
https://www.garlic.com/~lynn/2005n.html#30 Data communications over telegraph circuits
https://www.garlic.com/~lynn/2005p.html#10 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005p.html#16 DUMP Datasets and SMS
https://www.garlic.com/~lynn/2005q.html#3 winscape?
https://www.garlic.com/~lynn/2005q.html#6 What are the latest topic in TCP/IP
https://www.garlic.com/~lynn/2005q.html#27 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005q.html#37 Callable Wait State
https://www.garlic.com/~lynn/2005q.html#46 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#32 How does the internet really look like ?
https://www.garlic.com/~lynn/2005s.html#28 MVCIN instruction
https://www.garlic.com/~lynn/2005t.html#3 Privacy issue - how to spoof/hide IP when accessing email / usenet servers ?
https://www.garlic.com/~lynn/2005u.html#53 OSI model and an interview
https://www.garlic.com/~lynn/2006b.html#8 Free to good home: IBM RT UNIX
https://www.garlic.com/~lynn/2006e.html#35 The Pankian Metaphor
https://www.garlic.com/~lynn/2006e.html#36 The Pankian Metaphor
https://www.garlic.com/~lynn/2006e.html#37 The Pankian Metaphor
https://www.garlic.com/~lynn/2006e.html#38 The Pankian Metaphor
https://www.garlic.com/~lynn/2006e.html#39 The Pankian Metaphor
https://www.garlic.com/~lynn/2006f.html#12 Barbaras (mini-)rant
https://www.garlic.com/~lynn/2006g.html#18 TOD Clock the same as the BIOS clock in PCs?
https://www.garlic.com/~lynn/2006i.html#21 blast from the past on reliable communication
https://www.garlic.com/~lynn/2006j.html#34 Arpa address
https://www.garlic.com/~lynn/2006j.html#43 virtual memory
https://www.garlic.com/~lynn/2006j.html#46 Arpa address
https://www.garlic.com/~lynn/2006m.html#10 An Out-of-the-Main Activity
https://www.garlic.com/~lynn/2006r.html#6 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006s.html#20 real core
https://www.garlic.com/~lynn/2006s.html#51 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006t.html#36 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006u.html#55 What's a mainframe?
https://www.garlic.com/~lynn/2006u.html#56 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006v.html#30 vmshare
https://www.garlic.com/~lynn/2006w.html#26 Why so little parallelism?
https://www.garlic.com/~lynn/2006w.html#29 Descriptive term for reentrant program that nonetheless is
https://www.garlic.com/~lynn/2006w.html#38 Why so little parallelism?
https://www.garlic.com/~lynn/2006w.html#39 Why so little parallelism?
https://www.garlic.com/~lynn/2006w.html#52 IBM sues maker of Intel-based Mainframe clones
https://www.garlic.com/~lynn/2006x.html#7 vmshare

=====

misc. past posts mentioning the 1jan83 switchover to internetworking protocol
https://www.garlic.com/~lynn/2000e.html#18 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2001b.html#81 36-bit MIME types, PDP-10 FTP
https://www.garlic.com/~lynn/2001c.html#4 what makes a cpu fast
https://www.garlic.com/~lynn/2001e.html#16 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001h.html#9 VM: checking some myths.
https://www.garlic.com/~lynn/2001j.html#28 Title Inflation
https://www.garlic.com/~lynn/2001j.html#45 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001l.html#35 Processor Modes
https://www.garlic.com/~lynn/2001m.html#48 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001m.html#54 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#5 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#6 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#87 A new forum is up! Q: what means nntp
https://www.garlic.com/~lynn/2002.html#32 Buffer overflow
https://www.garlic.com/~lynn/2002b.html#53 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#58 ibm vnet : Computer Naming Conventions
https://www.garlic.com/~lynn/2002g.html#19 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#30 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#35 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#40 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#71 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002h.html#5 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002h.html#11 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#48 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#79 Al Gore and the Internet
https://www.garlic.com/~lynn/2002j.html#64 vm marketing (cross post)
https://www.garlic.com/~lynn/2002k.html#19 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002l.html#48 10 choices that were critical to the Net's success
https://www.garlic.com/~lynn/2002o.html#17 PLX
https://www.garlic.com/~lynn/2002q.html#4 Vector display systems
https://www.garlic.com/~lynn/2002q.html#35 HASP:
https://www.garlic.com/~lynn/2003c.html#47 difference between itanium and alpha
https://www.garlic.com/~lynn/2003d.html#59 unix
https://www.garlic.com/~lynn/2003e.html#36 Use of SSL as a VPN
https://www.garlic.com/~lynn/2003f.html#0 early vnet & exploit
https://www.garlic.com/~lynn/2003g.html#18 Multiple layers of virtual address translation
https://www.garlic.com/~lynn/2003g.html#44 Rewrite TCP/IP
https://www.garlic.com/~lynn/2003g.html#51 vnet 1000th node anniversary 6/10
https://www.garlic.com/~lynn/2003h.html#7 Why did TCP become popular ?
https://www.garlic.com/~lynn/2003h.html#16 Why did TCP become popular ?
https://www.garlic.com/~lynn/2003h.html#17 Why did TCP become popular ?
https://www.garlic.com/~lynn/2003i.html#32 A Dark Day
https://www.garlic.com/~lynn/2003j.html#1 FAST - Shame On You Caltech!!!
https://www.garlic.com/~lynn/2003l.html#0 One big box vs. many little boxes
https://www.garlic.com/~lynn/2003m.html#25 Microsoft Internet Patch
https://www.garlic.com/~lynn/2003n.html#44 IEN 45 and TCP checksum offload
https://www.garlic.com/~lynn/2003o.html#68 History of Computer Network Industry
https://www.garlic.com/~lynn/2003p.html#38 Mainframe Emulation Solutions
https://www.garlic.com/~lynn/2004.html#44 OT The First Mouse
https://www.garlic.com/~lynn/2004d.html#13 JSX 328x printing (portrait)
https://www.garlic.com/~lynn/2004e.html#12 Pre-relational, post-relational, 1968 CODASYL "Survey of Data Base Systems"
https://www.garlic.com/~lynn/2004e.html#30 The attack of the killer mainframes
https://www.garlic.com/~lynn/2004f.html#30 vm
https://www.garlic.com/~lynn/2004f.html#32 Usenet invented 30 years ago by a Swede?
https://www.garlic.com/~lynn/2004f.html#35 Questions of IP
https://www.garlic.com/~lynn/2004g.html#7 Text Adventures (which computer was first?)
https://www.garlic.com/~lynn/2004g.html#8 network history
https://www.garlic.com/~lynn/2004g.html#12 network history
https://www.garlic.com/~lynn/2004g.html#26 network history
https://www.garlic.com/~lynn/2004g.html#30 network history
https://www.garlic.com/~lynn/2004g.html#31 network history
https://www.garlic.com/~lynn/2004g.html#32 network history
https://www.garlic.com/~lynn/2004g.html#33 network history
https://www.garlic.com/~lynn/2004k.html#30 Internet turns 35, still work in progress
https://www.garlic.com/~lynn/2004l.html#0 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004m.html#26 Shipwrecks
https://www.garlic.com/~lynn/2004m.html#62 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#42 Longest Thread Ever
https://www.garlic.com/~lynn/2004n.html#43 Internet turns 35 today
https://www.garlic.com/~lynn/2004p.html#13 Mainframe Virus ????
https://www.garlic.com/~lynn/2004q.html#44 How many layers does TCP/IP architecture really have ?
https://www.garlic.com/~lynn/2004q.html#56 CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE)
https://www.garlic.com/~lynn/2004q.html#58 CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE)
https://www.garlic.com/~lynn/2005.html#45 OSI model and SSH, TCP, etc
https://www.garlic.com/~lynn/2005d.html#10 Cerf and Kahn receive Turing award
https://www.garlic.com/~lynn/2005d.html#44 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#63 Cranky old computers still being used
https://www.garlic.com/~lynn/2005e.html#39 xml-security vs. native security
https://www.garlic.com/~lynn/2005e.html#46 Using the Cache to Change the Width of Memory
https://www.garlic.com/~lynn/2005f.html#11 Mozilla v Firefox
https://www.garlic.com/~lynn/2005f.html#53 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005i.html#37 Secure FTP on the Mainframe
https://www.garlic.com/~lynn/2005k.html#5 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2005l.html#16 Newsgroups (Was Another OS/390 to z/OS 1.4 migration
https://www.garlic.com/~lynn/2005n.html#16 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#36 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#47 Anyone know whether VM/370 EDGAR is still available anywhere?
https://www.garlic.com/~lynn/2005n.html#52 ARP routing
https://www.garlic.com/~lynn/2005p.html#15 DUMP Datasets and SMS
https://www.garlic.com/~lynn/2005p.html#16 DUMP Datasets and SMS
https://www.garlic.com/~lynn/2005q.html#0 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005q.html#3 winscape?
https://www.garlic.com/~lynn/2005q.html#6 What are the latest topic in TCP/IP
https://www.garlic.com/~lynn/2005q.html#37 Callable Wait State
https://www.garlic.com/~lynn/2005r.html#32 How does the internet really look like ?
https://www.garlic.com/~lynn/2005s.html#28 MVCIN instruction
https://www.garlic.com/~lynn/2005t.html#3 Privacy issue - how to spoof/hide IP when accessing email / usenet servers ?
https://www.garlic.com/~lynn/2005u.html#56 OSI model and an interview
https://www.garlic.com/~lynn/2006b.html#12 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006b.html#35 Seeking Info on XDS Sigma 7 APL
https://www.garlic.com/~lynn/2006e.html#35 The Pankian Metaphor
https://www.garlic.com/~lynn/2006j.html#34 Arpa address
https://www.garlic.com/~lynn/2006j.html#45 Arpa address
https://www.garlic.com/~lynn/2006j.html#49 Arpa address
https://www.garlic.com/~lynn/2006k.html#1 Hey! Keep Your Hands Out Of My Abstraction Layer!
https://www.garlic.com/~lynn/2006k.html#6 Hey! Keep Your Hands Out Of My Abstraction Layer!
https://www.garlic.com/~lynn/2006k.html#10 Arpa address
https://www.garlic.com/~lynn/2006k.html#12 Arpa address
https://www.garlic.com/~lynn/2006k.html#32 PDP-1
https://www.garlic.com/~lynn/2006k.html#40 Arpa address
https://www.garlic.com/~lynn/2006k.html#42 Arpa address
https://www.garlic.com/~lynn/2006k.html#43 Arpa address
https://www.garlic.com/~lynn/2006k.html#53 Hey! Keep Your Hands Out Of My Abstraction Layer!
https://www.garlic.com/~lynn/2006k.html#56 Hey! Keep Your Hands Out Of My Abstraction Layer!
https://www.garlic.com/~lynn/2006l.html#25 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
https://www.garlic.com/~lynn/2006l.html#45 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
https://www.garlic.com/~lynn/2006l.html#52 Mainframe Linux Mythbusting
https://www.garlic.com/~lynn/2006n.html#5 Not Your Dad's Mainframe: Little Iron
https://www.garlic.com/~lynn/2006p.html#31 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006t.html#36 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006u.html#6 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006x.html#8 vmshare

Year-end computer bug could ground Shuttle

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Year-end computer bug could ground Shuttle
Newsgroups: alt.folklore.computers
Date: Sat, 23 Dec 2006 12:26:08 -0700
Brian Inglis <Brian.Inglis@SystematicSW.Invalid> writes:
Perhaps an independent financial system should be added to the usual list of executive arm, upper house, lower house, and independent judiciary for a functional modern nation.

if you ever refer to the federal reserve bank as an agency of the federal gov. to anybody from the federal reserve ... listen to what they have to say ... misc. federal reserve bank websites (notice they went for ".org" domains, not ".gov" domains):
http://www.frbsf.org/
http://www.richmondfed.org/
http://www.stls.frb.org/

minor topic drift ... merged payment taxonomy and glossary ... including terms from federal reserve website
https://www.garlic.com/~lynn/index.html#glosnote

"The Elements of Programming Style"

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "The Elements of Programming Style"
Newsgroups: alt.folklore.computers
Date: Sat, 23 Dec 2006 12:40:50 -0700
Anne & Lynn Wheeler <lynn@garlic.com> writes:
re:
https://www.garlic.com/~lynn/2006x.html#20 "The Elements of Programming Style"
https://www.garlic.com/~lynn/2006x.html#21 "The Elements of Programming Style"

the mainframe tcp/ip product had been implemented in vs/pascal ... i had done the enhancements to support rfc 1044 ... which had significantly higher thruput than the base system (1mbyte/sec vis-a-vis 44kbytes/sec ... and something like 3-4 orders of magnitude bytes per processor time efficiency)
https://www.garlic.com/~lynn/subnetwork.html#1044

as far as i knew, the vs/pascal implementation never had any buffer overflow exploits ... the ingrained paradigm was that all string and buffer operations were always done with explicit lengths.


re:
https://www.garlic.com/~lynn/2006x.html#29 "The Elements of Programming Style"
and
https://www.garlic.com/~lynn/2006x.html#30 "The Elements of Programming Style"

one of the comparisons i draw, with regard to the tendency of programmers to make mistakes in string & buffer length management in the typical c language environment, is to the mistakes made by programmers managing register content information in assembler language.

In a previous post, i referred to including register use-before-set checking in the PLI program that i wrote in the early 70s to analyze assembler listings. this was because a frequent programmer "mistake" was in the management of register contents.

Later when I did dumprx
https://www.garlic.com/~lynn/submain.html#dumprx

recent reference:
https://www.garlic.com/~lynn/2006x.html#24 IBM sues maker of Intel-based Mainframe clones

i did work on analyzing failure scenarios and building automated failure analysis scripts ... looking for particular types of failure signatures.

A large percentage of the failures turned out to be programmer mistakes in the management of register contents.

Now, one of the benefits claimed for higher level languages is the automation of register content management ... freeing the programmer from having to do it (and also freeing the programmer from making mistakes when doing it).

My assertion is that the programmer burden (in typical C language environments) with regard to length management is analogous to the programmer burden in assembler language involved in managing register contents. Expert programmers can (usually) do a perfect job of managing the various length issues (and avoid creating buffer overflow vulnerabilities) ... just like expert programmers can (usually) do a perfect job of managing register contents in assembler language programs. However, my point is that the difficulty and burden involved in such management gives rise to similar types of programmer mistakes. The argument for eliminating the assembler programmer burden (with regard to management of register contents) applies equally to the typical C programmer burden (with regard to management of lengths).
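
a much-simplified sketch in c of the kind of use-before-set scan mentioned above (the instruction encoding and bitmask bookkeeping are my own toy illustration ... a real analyzer would also have to track entry conditions, branch targets, etc):

  #include <stdio.h>

  struct insn {
      const char *text;
      int uses;   /* bitmask of registers read (bit n = reg n) */
      int sets;   /* bitmask of registers written */
  };

  int main(void)
  {
      struct insn code[] = {
          { "L   R1,PARM", 0,               1 << 1 },  /* sets R1 */
          { "AR  R1,R2",   (1<<1) | (1<<2), 1 << 1 },  /* R2 never set */
          { "ST  R1,SUM",  1 << 1,          0      },
      };
      int n = sizeof code / sizeof code[0];
      int defined = 0;    /* registers with known contents so far */

      for (int i = 0; i < n; i++) {
          int bad = code[i].uses & ~defined;
          for (int r = 0; r < 16; r++)
              if (bad & (1 << r))
                  printf("insn %d (%s): R%d used before set\n",
                         i, code[i].text, r);
          defined |= code[i].sets;
      }
      return 0;
  }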

misc. past posts mentioning c language buffer overflow issues
https://www.garlic.com/~lynn/subintegrity.html#overflow

SSL security with server certificate compromised

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SSL security with server certificate compromised
Newsgroups: comp.security.misc
Date: Sat, 23 Dec 2006 14:23:59 -0700
"Gonzo" <gonzalo.diethelm@gmail.com> writes:
But this was the point of my question. You seem to be saying that, having the encrypted sniffs, you can brute-force them eventually, and I agree that this is (at least in theory) doable. What I am asking is: does having the server public AND private key help you in any way to decrypt the sniffed traffic? In other words, if I give you full sniffed traffic for a couple of requests / responses, and ask you to decrypt them, would it make any difference to you (faster, easier) if I also hand you the server's public / private keys?

the (symmetric) session key is generated by the client, encrypted with the server's public key, and then transmitted to the server ... the server then obtains the session key value by decrypting it with the server's private key. so given access to the server's private key, you can decrypt the transmitted session key (and therefore the traffic) for any SSL session with that server ... w/o having to resort to any brute force.
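
a toy c model of the flow (the "encryption" here is just xor with a made-up key so the steps are visible ... no real crypto, purely an illustration):

  #include <stdio.h>

  /* stand-in keypair: the "public" and "private" operations are xor
     with the same byte ... which is exactly why this is a toy */
  static const unsigned char SERVER_PUB = 0x5a, SERVER_PRIV = 0x5a;

  static unsigned char pub_encrypt(unsigned char m)  { return m ^ SERVER_PUB; }
  static unsigned char priv_decrypt(unsigned char c) { return c ^ SERVER_PRIV; }

  int main(void)
  {
      unsigned char session_key = 0x3c;  /* generated by the client */
      unsigned char wire = pub_encrypt(session_key);  /* what a sniffer sees */

      /* the server (or anyone holding the private key) recovers the
         session key directly ... no brute force required */
      printf("sniffed 0x%02x, recovered with private key 0x%02x\n",
             wire, priv_decrypt(wire));
      return 0;
  }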

for little topic drift, rfc 4772 announced today ... "Security Implications of Using the Data Encryption Standard (DES)" which includes discussion on brute-force attacks ... ref
https://www.garlic.com/~lynn/aadsm26.htm#16 Security Implications of Using the Data Encryption Standard (DES)

for various drifts ... lots of past posts mentioning SSL server digital certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcert

and lots of past posts mentioning (general) exploits, fraud, vulnerabilities, and/or threats
https://www.garlic.com/~lynn/subintegrity.html#fraud

and lots of past posts discussing catch-22 for proposed improvements in the SSL server digital certificate infrastructure
https://www.garlic.com/~lynn/subpubkey.html#catch22

basically the implications of proposals for validating SSL server digital certificate applications, which add digital signatures and verify the application digital signature by doing a real-time retrieval of public keys onfile with the domain name infrastructure .... aka basically a certificate-less infrastructure
https://www.garlic.com/~lynn/subpubkey.html#certless

which could then lead to everybody doing real-time retrieval of onfile public keys ... eliminating the requirement for any digital certificates. a certificate-less public key infrastructure proposal from old 1981 email:
https://www.garlic.com/~lynn/2006w.html#12 more secure communication over the network
https://www.garlic.com/~lynn/2006w.html#15 more secure communication over the network
https://www.garlic.com/~lynn/2006w.html#18 more secure communication over the network

a recent thread with discussion of some other SSL server issues/vulnerabilities
https://www.garlic.com/~lynn/2006v.html#49 Patent buster for a method that increases password security
https://www.garlic.com/~lynn/2006v.html#51 Patent buster for a method that increases password security
https://www.garlic.com/~lynn/2006w.html#0 Patent buster for a method that increases password security
https://www.garlic.com/~lynn/2006w.html#4 Patent buster for a method that increases password security
https://www.garlic.com/~lynn/2006w.html#5 Patent buster for a method that increases password security



