List of Archived Posts

2003 Newsgroup Postings (12/12 - 12/31)

Weird new IBM created word
An entirely new proprietary hardware strategy
History of Computer Network Industry
Hyperthreading vs. SMP
Does OTP need authentication?
Hyperthreading vs. SMP
Does OTP need authentication?
IBM advertising artefact (what is it ?)
pointless embedded systems
virtual-machine theory
Secure web logins w random passwords
Order of Encryption and Authentication
Danger: Derrida at work
packetloss bad for sliding window protocol ?
64 bits vs non-coherent MPP was: Re: Itanium strikes again
packetloss bad for sliding window protocol ?
Star Trek, time travel, and such
Does OTP need authentication?
The BASIC Variations
socks & color
Dumb anti-MITM hacks / CAPTCHA application
Sun researchers: Computers do bad math ;)
1960s images of IBM 360 mainframes
1960s images of IBM 360 mainframes
Mainframe Training
Mainframe Training
Sun researchers: Computers do bad math ;)
The BASIC Variations
The BASIC Variations
Sun researchers: Computers do bad math ;)
Not A Survey Question
The BASIC Variations
Mainframe Emulation Solutions
[IBM-MAIN] NY Times editorial on white collar jobs going
The BASIC Variations
value of pi
Mainframe Training
The BASIC Variations
Mainframe Emulation Solutions
Mainframe Emulation Solutions
virtual-machine theory
comp.arch classic: the 10-bit byte
The BASIC Variations
Mainframe Emulation Solutions
Mainframe Emulation Solutions
Saturation Design Point
comp.arch classic: the 10-bit byte

pointless embedded systems

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: pointless embedded systems
Newsgroups: alt.folklore.computers
Date: Fri, 12 Dec 2003 16:11:31 GMT
dgriffi writes:
So, is it a Hard Thing to design a washing-machine computer that is operated exactly the same as an old-fashioned clockwork one? Methinks not. A readout that indicates estimated finish time doesn't make sense. Isn't that sort of information supposed to be on the main control knob anyhow? If the metric doesn't correspond to minutes, it's not hard to get an idea of how long a wash ought to take.

I believe there was a study of analog/hands watches vis-a-vis digital read-out watches. the claim was that the hands provided a lot more information content than the digital read-out. the hands not only told what the current time is ... but the position tended to convey how long since ... and how long to; with a digital watch you got what the current time is ... but were forced to calculate how long since and how long to.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

An entirely new proprietary hardware strategy

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: An entirely new proprietary hardware strategy
Newsgroups: comp.arch
Date: Fri, 12 Dec 2003 16:29:35 GMT
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
It would be sheer lunacy. SGI are the leaders in large-scale cache coherence, probably followed by Sun, HP and IBM, and even SGI would gasp at being asked to tackle that problem. Wave a large enough cheque and I am sure that something could be arranged ....

SCI defined distributed memory coherence ... there were implementations by (at least) Convex (bought by HP), Sequent (bought by IBM), and data general (EMC bought the dg disk array stuff ... the sci stuff ???). current web page:
http://www.scizzl.com/

SCI standardization (driven out of SLAC) was going on somewhat concurrently with FCS (driven by LLNL) and HiPPI (driven by LANL).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

History of Computer Network Industry

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History of Computer Network Industry
Newsgroups: alt.folklore.computers
Date: Sat, 13 Dec 2003 00:22:07 GMT
Morten Reistad writes:
This is correct. It was first in 1983 the Internet became just that; an internet. But proper routers were still a long way off. Even in 1987 Cisco were peddling the GS series; not exactly a sound construction. It was first with the 4000 and 7000 that they had products that were ready for serious production use.

It was amazing that Sun, HP and others let them keep that market for so long. But I tried to convince Sun to get their ass moving in the general direction of the Internet; and talked to deaf ears.


can you imagine the difficulty inside IBM? ... trying to convince people like the guy that did APPN ... that he was wasting his time trying to add networking to SNA (the sna group even non-concurred with the APPN announcement ... and so the announcement had to be escalated and then rewritten to make sure there was no mention of any connection between APPN and networking with SNA) ... and instead come to work on tcp/ip

a little thread drift ... archeological SUN story
https://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party
https://www.garlic.com/~lynn/2002g.html#67 Coulda, Woulda, Shoudda moments?

numerous renditions of RFC1044 implementation story:
https://www.garlic.com/~lynn/93.html#28 Log Structured filesystems -- think twice
https://www.garlic.com/~lynn/96.html#14 mainframe tcp/ip
https://www.garlic.com/~lynn/96.html#15 tcp/ip
https://www.garlic.com/~lynn/96.html#17 middle layer
https://www.garlic.com/~lynn/98.html#34 ... cics ... from posting from another list
https://www.garlic.com/~lynn/98.html#49 Edsger Dijkstra: the blackest week of his professional life
https://www.garlic.com/~lynn/98.html#50 Edsger Dijkstra: the blackest week of his professional life
https://www.garlic.com/~lynn/99.html#36 why is there an "@" key?
https://www.garlic.com/~lynn/99.html#123 Speaking of USB ( was Re: ASR 33 Typing Element)
https://www.garlic.com/~lynn/2000.html#90 Ux's good points.
https://www.garlic.com/~lynn/2000c.html#59 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000f.html#30 OT?
https://www.garlic.com/~lynn/2001d.html#63 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001d.html#65 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001e.html#52 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001g.html#33 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001h.html#44 Wired News :The Grid: The Next-Gen Internet?
https://www.garlic.com/~lynn/2001j.html#20 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2002.html#11 The demise of compaq
https://www.garlic.com/~lynn/2002i.html#43 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#45 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002j.html#67 Total Computing Power
https://www.garlic.com/~lynn/2002k.html#31 general networking is: DEC eNet: was Vnet : Unbelievable
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002o.html#51 E-mail from the OS-390 ????
https://www.garlic.com/~lynn/2002q.html#27 Beyond 8+3
https://www.garlic.com/~lynn/2003.html#67 3745 & NCP Withdrawl?
https://www.garlic.com/~lynn/2003b.html#29 360/370 disk drives
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2003c.html#28 difference between itanium and alpha
https://www.garlic.com/~lynn/2003c.html#77 COMTEN- IBM networking boxes
https://www.garlic.com/~lynn/2003c.html#79 COMTEN- IBM networking boxes
https://www.garlic.com/~lynn/2003d.html#33 Why only 24 bits on S/360?
https://www.garlic.com/~lynn/2003d.html#35 Why only 24 bits on S/360?
https://www.garlic.com/~lynn/2003d.html#37 Why only 24 bits on S/360?
https://www.garlic.com/~lynn/2003d.html#59 unix
https://www.garlic.com/~lynn/2003i.html#43 A Dark Day
https://www.garlic.com/~lynn/2003j.html#2 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2003k.html#26 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2003n.html#40 Cray to commercialize Red Storm

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Hyperthreading vs. SMP

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Hyperthreading vs. SMP
Newsgroups: linux.redhat
Date: Sat, 13 Dec 2003 02:15:55 GMT
wb writes:
How is memory contention (cache coherence) maintained with these hyperthreading machines ? Does it require an external memory agent ? In a SMP or NUMA, a memory controller (MIP's called it an agent ) ensured that memory integrity was kept. Can each virtual process instance be making memory updates ?

hyperthreading just uses more than one instruction stream, typically in an already superscalar processor ... sharing the same cache.

the superscalar processor already has multiple instructions in flight ... one of the purposes of superscalar is to compensate for cache misses ... other instructions can proceed in parallel when one instruction is stalled because of a cache miss. The superscalar processor may also have speculative execution when conditional branches are encountered .... i.e. assume that the direction of the branch is to go one way ... and if it turns out not to ... back out all the instructions executed on the wrong path.

one of the first such efforts was a dual i-stream design for the 370/195 (some 30 years ago). the 195 had a 64-instruction pipeline ... but w/o support for speculative execution ... so branches in the instruction stream drained the pipeline. except for some specialized codes, the 195s tended to run at half (or less) of theoretical thruput because of the large number of conditional branches commonly found in standard codes. the dual i-stream project defined two instruction streams, a duplicate set of registers, and a red/black bit flag tagging each operation in the pipeline (indicating which instruction stream the operation was associated with).

hyperthreading, in principle, supports more than one instruction stream concurrently within an already complex superscalar context ... using a common processor cache.

in such configurations ... two or more physical/logical processors (instruction streams) sharing the same cache won't have a cache consistency problem (although they may have some serialization issues). it is when there are multiple caches that the issue of memory/cache consistency arises.

A possible configuration is four physical processors with two physical caches (where each physical cache supports two physical processors). There are cache consistency issues involved in coordination between the two caches (it is not between the four processors but between the caches). If you add hyperthreading to each of the four physical processors (say it now appears as eight logical instruction streams), that change is possibly totally transparent to the cache operation and the coherency operation between the two caches.

serialization of processors is typically done with atomic operations like compare&swap ... but that is somewhat orthogonal to the coherency implementation between caches (which can be totally independent of the number/kind of instruction streams supported).
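
a minimal python sketch of the compare&swap idea (names are made up, and a lock merely stands in for the atomicity that the actual hardware instruction provides in a single operation):

import threading

class Word:
    """One memory word with a compare-and-swap primitive. The lock only
    stands in for the atomicity the hardware guarantees; a real
    compare&swap is a single atomic instruction."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        with self._lock:
            return self._value

    def compare_and_swap(self, expected, new):
        # atomically: if the current value equals expected, store new
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def bump_counter(word, n):
    # retry loop: redo the update whenever another instruction stream
    # changed the word between the load and the swap
    for _ in range(n):
        while True:
            old = word.load()
            if word.compare_and_swap(old, old + 1):
                break

counter = Word()
threads = [threading.Thread(target=bump_counter, args=(counter, 10000))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter.load() == 40000    # no updates lost across the four threads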

recent posting from somewhat related thread in comp.arch
https://www.garlic.com/~lynn/2003p.html#1 An entirely new proprietary hardware strategy

misc. past threads mentioning 370/195 dual i-stream work from 30 years ago:
https://www.garlic.com/~lynn/94.html#38 IBM 370/195
https://www.garlic.com/~lynn/99.html#73 The Chronology
https://www.garlic.com/~lynn/99.html#97 Power4 = 2 cpu's on die?
https://www.garlic.com/~lynn/2000g.html#15 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001j.html#27 Pentium 4 SMT "Hyperthreading"
https://www.garlic.com/~lynn/2001n.html#63 Hyper-Threading Technology - Intel information.
https://www.garlic.com/~lynn/2002g.html#70 Pipelining in the past
https://www.garlic.com/~lynn/2002g.html#76 Pipelining in the past
https://www.garlic.com/~lynn/2003l.html#48 IBM Manuals from the 1940's and 1950's
https://www.garlic.com/~lynn/2003m.html#60 S/360 undocumented instructions?

lots of past mentions of compare and swap (again from 30 some years ago):
https://www.garlic.com/~lynn/93.html#0 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/93.html#14 S/360 addressing
https://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/2000g.html#16 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001d.html#42 IBM was/is: Imitation...
https://www.garlic.com/~lynn/2001e.html#73 CS instruction, when introducted ?
https://www.garlic.com/~lynn/2001f.html#41 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001f.html#61 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001f.html#69 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001f.html#70 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001f.html#73 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001f.html#74 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001f.html#75 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001f.html#76 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001g.html#4 Extended memory error recovery
https://www.garlic.com/~lynn/2001g.html#8 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001g.html#9 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001i.html#2 Most complex instructions (was Re: IBM 9020 FAA/ATC Systems from 1960's)
https://www.garlic.com/~lynn/2001i.html#34 IBM OS Timeline?
https://www.garlic.com/~lynn/2001k.html#8 Minimalist design (was Re: Parity - why even or odd)
https://www.garlic.com/~lynn/2001k.html#65 SMP idea for the future
https://www.garlic.com/~lynn/2001k.html#67 SMP idea for the future
https://www.garlic.com/~lynn/2001n.html#42 Cache coherence [was Re: IBM POWER4 ...]
https://www.garlic.com/~lynn/2001n.html#43 IBM 1800
https://www.garlic.com/~lynn/2002.html#52 Microcode?
https://www.garlic.com/~lynn/2002c.html#9 IBM Doesn't Make Small MP's Anymore
https://www.garlic.com/~lynn/2002f.html#13 Hardware glitches, designed in and otherwise
https://www.garlic.com/~lynn/2002h.html#45 Future architecture [was Re: Future micro-architecture: ]
https://www.garlic.com/~lynn/2002l.html#58 Spin Loop?
https://www.garlic.com/~lynn/2002l.html#59 Spin Loop?
https://www.garlic.com/~lynn/2002l.html#69 The problem with installable operating systems
https://www.garlic.com/~lynn/2003.html#12 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2003.html#18 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2003b.html#20 Card Columns
https://www.garlic.com/~lynn/2003c.html#75 The relational model and relational algebra - why did SQL become the industry standard?
https://www.garlic.com/~lynn/2003c.html#78 The relational model and relational algebra - why did SQL become the industry standard?
https://www.garlic.com/~lynn/2003d.html#17 CA-RAMIS
https://www.garlic.com/~lynn/2003e.html#67 The Pentium 4 - RIP?
https://www.garlic.com/~lynn/2003g.html#12 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003g.html#15 Disk capacity and backup solutions
https://www.garlic.com/~lynn/2003g.html#30 One Processor is bad?
https://www.garlic.com/~lynn/2003h.html#5 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003h.html#19 Why did TCP become popular ?
https://www.garlic.com/~lynn/2003h.html#20 UT200 (CDC RJE) Software for TOPS-10?
https://www.garlic.com/~lynn/2003j.html#58 atomic memory-operation question
https://www.garlic.com/~lynn/2003m.html#29 SR 15,15
https://www.garlic.com/~lynn/2003o.html#32 who invented the "popup" ?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Does OTP need authentication?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Does OTP need authentication?
Newsgroups: sci.crypt,sci.crypt.random-numbers
Date: Sat, 13 Dec 2003 16:13:15 GMT
"John E. Hadstate" writes:
Authentication has two different meanings:

(1) the message was sent by the person or organization that claims to have sent it, and

(2) the message hasn't been altered.


from the security PAIN taxonomy: privacy, authentication, integrity, non-repudiation.

authentication taxonomy

cryptographic techniques can implement checks for both authentication and integrity (aka digital signatures).

encryption may implement both privacy and authentication ... but straight encryption by itself might not implement integrity. redundant information like a (possibly encrypted) secure hash of the plaintext can implement an integrity check.

some sort of redundant information like parity, an error correcting code, or a hash can be appended to the end of the plaintext message and the whole thing encrypted with a shared-secret (known only by the sender and receiver) ... achieving privacy, authentication, and integrity.

the message might be encrypted with shared-secret (achieving privacy and authentication) and then the encrypted message digitally signed (achieving authentication and integrity).

the plain text message can be digitally signed (achieving authentication and integrity) and then the combined plaintext and digital signature is encrypted with shared-secret (achieving privacy and authentication).

doing both encryption and digital signature achieves both privacy and integrity ... while effectively providing duplicate authentication.

note that sometimes checking the integrity of the message is referred to as authentication of the message (integrity) ... which creates some semantic ambiguity regarding the word authentication (whether it refers to the message integrity or the sending entity).
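
as a concrete illustration of adding redundant information for an integrity/authentication check ... a minimal python sketch (stdlib only; the message, pad, and key values are made up) of one of the possible combinations: a one-time-pad XOR for privacy, plus an HMAC over the ciphertext keyed with a second shared-secret for integrity/authentication:

import hmac, hashlib, secrets

def xor_pad(data: bytes, pad: bytes) -> bytes:
    # one-time-pad "encryption": privacy only, no integrity
    assert len(pad) >= len(data)
    return bytes(d ^ p for d, p in zip(data, pad))

def protect(msg: bytes, pad: bytes, mac_key: bytes) -> bytes:
    ct = xor_pad(msg, pad)
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    return ct + tag                      # redundant info appended

def unprotect(blob: bytes, pad: bytes, mac_key: bytes) -> bytes:
    ct, tag = blob[:-32], blob[-32:]
    expect = hmac.new(mac_key, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("integrity check failed (message altered)")
    return xor_pad(ct, pad)

msg = b"buy 100 shares of IBM stock"
pad = secrets.token_bytes(len(msg))      # shared one-time pad material
mac_key = secrets.token_bytes(32)        # separate shared secret for the MAC
assert unprotect(protect(msg, pad, mac_key), pad, mac_key) == msg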

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Hyperthreading vs. SMP

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Hyperthreading vs. SMP
Newsgroups: linux.redhat
Date: Sat, 13 Dec 2003 15:51:47 GMT
"Robert M. Riches Jr" writes:
Well said, except for one minor point of terminology. At least in my 17 year experience in microprocessor design, superscaler basically meant having multiple parallel execution units, so that multiple instructions could be sent to execution in the same clock cycle--lowering the best-case CPI below unity. It is out-of-order execution (or "dynamic execution" per Pentium Pro marketing literature) that allows instructions to proceed when an earlier instruction is stalled by a cache miss.

yes, it was slightly glossed over/sloppy to equate out-of-order and superscalar. you can have bookkeeping for multiple instructions in flight supporting out-of-order execution ... in support of latency compensation associated with instruction stalling (because of cache misses). superscalar already has bookkeeping for multiple instructions in flight ... so the incremental complexity for out-of-order and multiple threads tends to be much smaller.

a non-out-of-order, but purely threaded, example is tera (which bought cray and now goes by the cray name) .... a high-speed computer where possibly just about every instruction stalls, so it implements a large number of instruction streams (threads). the mta-2 supports up to 128 hardware threads:
http://www.cray.com/products/systems/mta/

from above:
Multi-threading Each MTA processor has up to 128 RISC-like hardware threads. Each thread is a hardware stream with its own instruction counter, register set, stream status word and target and trap registers. A different hardware stream is activated every clock period. This fundamental hardware innovation provides scalable memory latency tolerance.

High-Bandwidth Interconnect An extremely high bandwidth interconnection network lets each processor access arbitrary locations in uniform shared memory at up to 2.5 gigabytes per second. About 25 active streams per MTA processor are needed to overlap all memory latency with computational processing.


--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Does OTP need authentication?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Does OTP need authentication?
Newsgroups: sci.crypt,sci.crypt.random-numbers
Date: Sun, 14 Dec 2003 17:32:01 GMT
"Douglas A. Gwyn" writes:
I haven't yet checked all the other replies, but one thing that can go wrong is that the message (header) indicates the portion of the pad to use, the attacker blocks delivery of a message (so that portion of the OTP is not destroyed by the intended recipient) and keeps a copy, then sends the copy (spoof) later at a time when the old plaintext should no longer be accepted by the receiver. (This is called a replay attack.) E.g., think of a stock transaction: "buy 100 shares of IBM stock", which may be a bad thing to do once IBM shares start a downswing. With proper authentication, the attacker's spoof message will not be accepted as coming from the original sender.

or a man-in-the-middle (MITM) attack ... not a traditional replay attack where the attacker repeats a message that had previously been sent/received.

traditional replay attacks are accepted as authenticated ... coming from the valid sender ... because they are a repeat of a message that did in fact come from a valid sender.

the issue with a delayed message is whether it is a validly delayed message ... possibly because of intermediate communication glitches (like email where some server has been down for a period of time) or a MITM attack ... aka purposefully delayed to take advantage of some characteristic of a higher level business process (as in your example).

the countermeasure for a traditional replay attack (same message received more than once) either has some unique identifier for each message (and the recipient does something like keeping a log) or there is protocol chatter where the recipient provides a unique challenge as part of the initialization. Non-real-time protocols (like email) will tend to use a unique sender-originated value ... while real-time protocols might tend towards protocol chatter initialization with the recipient doing something more like challenge/response.
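
a minimal python sketch (stdlib only, made-up names and values) of the unique-identifier-plus-log countermeasure:

import hmac, hashlib, secrets

class ReplayCheckingReceiver:
    """Receiver that keeps a log of the message identifiers it has seen."""
    def __init__(self, shared_secret: bytes):
        self.key = shared_secret
        self.seen = set()                # the "log" of unique identifiers

    def accept(self, msg_id: bytes, body: bytes, tag: bytes) -> bool:
        expect = hmac.new(self.key, msg_id + body, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            return False                 # not from a holder of the secret
        if msg_id in self.seen:
            return False                 # authenticated, but a replay
        self.seen.add(msg_id)
        return True

key = secrets.token_bytes(32)
rx = ReplayCheckingReceiver(key)
mid = secrets.token_bytes(16)            # sender-originated unique value
body = b"buy 100 shares"
tag = hmac.new(key, mid + body, hashlib.sha256).digest()
assert rx.accept(mid, body, tag) is True
assert rx.accept(mid, body, tag) is False    # same message replayed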

something like delay sensitivity in the higher level business processes might require other kinds of countermeasures ... i.e. given that the higher level business processes may be sensitive to delays ... then they might have to have delay recognition ability ... because the infrastructure can be susceptible to other types of delay-related failures (not just attacker-initiated delays).

to some extent the message integrity issue is similar. transmission level protocols tend to have various kinds of redundant information with regard to message integrity and transmission errors. an attacker may try to attack the message integrity in such a way that it is not caught by the transmission error process(es).

this is one of the places that end-to-end shows up as a basic security principle .... aka non-end-to-end solutions tend to provide cracks in the infrastructure which give rise to infrastructure vulnerabilities. These vulnerabilities can just be plain systemic failure issues (intermediate email server being out of service for some period of time) or purposefully introduced by attacks (MITM attack delaying message until after some specific event). Another is MITM corruption of message integrity at some sort of intermediate node that is not caught by the transmission based message integrity services.

previous posting in this thread regarding the taxonomy of authentication
https://www.garlic.com/~lynn/2003p.html#4 Does OTP need authentication?

misc. past posts re: replay
https://www.garlic.com/~lynn/aadsm9.htm#3dvulner5 3D Secure Vulnerabilities?
https://www.garlic.com/~lynn/aadsm12.htm#6 NEWS: 3D-Secure and Passport
https://www.garlic.com/~lynn/aadsm13.htm#27 How effective is open source crypto?
https://www.garlic.com/~lynn/aadsm13.htm#28 How effective is open source crypto? (addenda)
https://www.garlic.com/~lynn/aadsm13.htm#29 How effective is open source crypto? (bad form)
https://www.garlic.com/~lynn/aadsm13.htm#31 How effective is open source crypto? (bad form)
https://www.garlic.com/~lynn/aadsm14.htm#30 Maybe It's Snake Oil All the Way Down
https://www.garlic.com/~lynn/2001d.html#20 What is PKI?
https://www.garlic.com/~lynn/2002m.html#14 fingerprint authentication
https://www.garlic.com/~lynn/2003g.html#70 Simple resource protection with public keys
https://www.garlic.com/~lynn/2003j.html#25 Idea for secure login
https://www.garlic.com/~lynn/2003m.html#50 public key vs passwd authentication?

misc. past posts re: MITM
https://www.garlic.com/~lynn/aepay10.htm#84 Invisible Ink, E-signatures slow to broadly catch on (addenda)
https://www.garlic.com/~lynn/aepay11.htm#37 Who's afraid of Mallory Wolf?
https://www.garlic.com/~lynn/aepay12.htm#36 DNS, yet again
https://www.garlic.com/~lynn/aadsm13.htm#35 How effective is open source crypto? (bad form)
https://www.garlic.com/~lynn/aadsm14.htm#1 Who's afraid of Mallory Wolf?
https://www.garlic.com/~lynn/aadsm14.htm#2 Who's afraid of Mallory Wolf? (addenda)
https://www.garlic.com/~lynn/aadsm14.htm#3 Armoring websites
https://www.garlic.com/~lynn/aadsm14.htm#4 Who's afraid of Mallory Wolf?
https://www.garlic.com/~lynn/aadsm14.htm#5 Who's afraid of Mallory Wolf?
https://www.garlic.com/~lynn/aadsm14.htm#9 "Marginot Web" (SSL, payments, etc)
https://www.garlic.com/~lynn/aadsm14.htm#39 An attack on paypal
https://www.garlic.com/~lynn/aadsm14.htm#43 PKI "not working"
https://www.garlic.com/~lynn/aadsm15.htm#26 SSL, client certs, and MITM (was WYTM?)
https://www.garlic.com/~lynn/aadsm15.htm#27 SSL, client certs, and MITM (was WYTM?)
https://www.garlic.com/~lynn/aadsm15.htm#28 SSL, client certs, and MITM (was WYTM?)
https://www.garlic.com/~lynn/aadsm15.htm#29 SSL, client certs, and MITM (was WYTM?)
https://www.garlic.com/~lynn/2001k.html#1 Are client certificates really secure?
https://www.garlic.com/~lynn/2001m.html#41 Solutions to Man in the Middle attacks?
https://www.garlic.com/~lynn/2002d.html#47 SSL MITM Attacks
https://www.garlic.com/~lynn/2002d.html#50 SSL MITM Attacks
https://www.garlic.com/~lynn/2002j.html#38 MITM solved by AES/CFB - am I missing something?!
https://www.garlic.com/~lynn/2002j.html#58 SSL integrity guarantees in abscense of client certificates
https://www.garlic.com/~lynn/2002k.html#11 Serious vulnerablity in several common SSL implementations?
https://www.garlic.com/~lynn/2002k.html#51 SSL Beginner's Question
https://www.garlic.com/~lynn/2002l.html#5 What good is RSA when using passwords ?
https://www.garlic.com/~lynn/2002m.html#65 SSL certificate modification
https://www.garlic.com/~lynn/2003.html#63 SSL & Man In the Middle Attack
https://www.garlic.com/~lynn/2003f.html#25 New RFC 3514 addresses malicious network traffic
https://www.garlic.com/~lynn/2003g.html#38 What is Meet In The Middle Attack
https://www.garlic.com/~lynn/2003h.html#23 Authentication protocol
https://www.garlic.com/~lynn/2003j.html#25 Idea for secure login
https://www.garlic.com/~lynn/2003l.html#6 The Original Interlock Protocol (what is...)
https://www.garlic.com/~lynn/2003l.html#36 Proposal for a new PKI model (At least I hope it's new)
https://www.garlic.com/~lynn/2003m.html#50 public key vs passwd authentication?
https://www.garlic.com/~lynn/2003n.html#10 Cracking SSL
https://www.garlic.com/~lynn/2003n.html#30 Is this right? Question about SSL and PKI
https://www.garlic.com/~lynn/2003o.html#3 Bank security question (newbie question)
https://www.garlic.com/~lynn/2003o.html#4 Bank security question (newbie question)
https://www.garlic.com/~lynn/2003o.html#8 Bank security question (newbie question)
https://www.garlic.com/~lynn/2003o.html#22 securID weakness

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

IBM advertising artefact (what is it ?)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM advertising artefact (what is it ?)
Newsgroups: alt.folklore.computers
Date: Mon, 15 Dec 2003 01:01:17 GMT
Manuel Viet writes:
I was given recently a somewhat curious object (sorry, i've no picture to show, so i'll describe it the best i can). It's a metallic cylinder, about 7 cm tall, glued to a black plastic base labeled IBM. What's odd is that the cylinder bears lines of figures. Obviously, its purpose is to hold pencils. But the person who gave me this one said that he had another one where, instead of the lines of figures, it's the frame of an invoice which is embossed on the cylinder.

After turning this upside down, I came to the conclusion that in fact those cylinders were made to print on punch cards before they were used.


i've seen quite a few of them over the years ... pen/pencil holders with an engraved image of punch card lines. I believe that they were just made that way as pen/pencil holders .... (but what do i know)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

pointless embedded systems

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: pointless embedded systems
Newsgroups: alt.folklore.computers
Date: Mon, 15 Dec 2003 14:04:46 GMT
Larry Elmore <ljelmore_@_comcast_._net> writes:
Alaska can definitely get colder than that. I've lived there, and been out when it was below -40C. Same in Montana. There was a 10 or 11 day stretch in 1988/89 when the low was below -40 every night. Of course, at those temperatures, almost no engine will start unless you've got a block heater plugged in, anyway.

one year that i was at the university ... it had the distinction of being the coldest spot in the nation one night (-52f), which was somewhat unusual for the area. the high the next day was -27f. as a temporary measure, you run a 60 watt light bulb next to the engine. at -52f ... supposedly you can throw a glass of water into the air and it freezes(?) before it hits the ground.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

virtual-machine theory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual-machine theory
Newsgroups: bit.listserv.ibm-main
Date: Mon, 15 Dec 2003 14:23:13 GMT
gilmap writes:
You (and several other recent participants in this thread) are confusing IBM's VM(tm) product with the subject of Virtual Machine Theory. Remember, there's a legitimate discipline of Computer Science beyond the study of IBM's hardware and software.

Concerning the remarks about the effects of the various DIAGs, on a realistic Virtual Machine (not (tm)), DIAG should have the same effect as perceived by the program as on the real hardware.


long ago and far away ... as an undergraduate ... in cp/67, i faked a disk CCW seek, search, tic, read/write operation to complete as CC=1, csw stored; i.e. it was a synchronous disk operation for cms.

bob adair (at the science center) made a big point that i was violating the virtual machine architecture ... that all violations of the virtual machine architecture should be done with a diagnose instruction ... since the diagnose instruction is defined as model dependent. misc. science center refs:
https://www.garlic.com/~lynn/subtopic.html#545tech

the "theory" was that you could have a 360/vm model; i.e. instead of a 360/67 ... it would be a 360/vm ... aka a valid model of the 360 that conformed to everything in the principles of operation for true 360 operation ... but was allowed to define the implementation of the diagnose instruction according to the claim that it was a virtual machine model of the 360 (as opposed to a 360 model 30, or a model 40, or a model 65, etc) .... aka each model of the 360 was allowed to define what the diagnose instruction did on that model (i.e. the definition of what the diagnose instruction did was model dependent).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Secure web logins w random passwords

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Secure web logins w random passwords
Newsgroups: alt.security
Date: Tue, 16 Dec 2003 03:09:56 GMT
lkandia@seerx.com (Luke Kandia) writes:
I am looking for a vendor for a smartcard login system. This would be used in conjunction with an SSL website, to thwart keystroke logging and trojan software from capturing passwords that could be re-used.

The device is credit card sized with a keypad and LCD display. When you login you are asked to enter a series of digits (or alphanumerics) into your card. The card then processes the input using a built in algorithm to generate a password on the LCD readout. This password is only valid for one login.

I have seen such a device in action a few years back utilized as a means of accessing a firewall product called Borderware. It appeared to work well. I would greatly appreciate any news on where I could track down such a device, that being the software and hardware.


there is a calculator-looking device that is used in europe for online & telephone banking ... it really isn't a one-time password system, it is a challenge/response system (i.e. you get a challenge ... and, using some shared-secret, generate the correct response).
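
a minimal python sketch of that basic flow (stdlib hmac standing in for whatever proprietary algorithm the actual cards implement; the secret and challenge values are made up):

import hmac, hashlib, secrets

# the "card": holds the shared-secret and computes a response to a challenge
def card_response(shared_secret: bytes, challenge: str) -> str:
    digest = hmac.new(shared_secret, challenge.encode(), hashlib.sha256).digest()
    return digest.hex()[:8]              # short code the user can type in

# the bank side: issue a fresh challenge, later check the typed-in response
secret = secrets.token_bytes(32)         # provisioned into the card
challenge = secrets.token_hex(4)         # displayed at login time
response = card_response(secret, challenge)   # user keys the challenge into the card
assert hmac.compare_digest(response, card_response(secret, challenge))
# a captured response is useless for the next login ... the challenge changes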

some discussion of such a device:
https://www.garlic.com/~lynn/2001g.html#57 Q: Internet banking
https://www.garlic.com/~lynn/2001m.html#9 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2002f.html#55 Security Issues of using Internet Banking
https://www.garlic.com/~lynn/2002l.html#17 What is microcode?
https://www.garlic.com/~lynn/2003e.html#37 Keeping old hardware alive?
https://www.garlic.com/~lynn/2003o.html#3 Bank security question (newbie question)
https://www.garlic.com/~lynn/2003o.html#8 Bank security question (newbie question)
https://www.garlic.com/~lynn/2003o.html#9 Bank security question (newbie question)

note the three posts in the above on "bank security question" discuss challenge/response in the context of not having any sort of SSL or any other countermeasure for man-in-the-middle attacks.

there is an internet standard for something called one-time-password (OTP) which is not a challenge/response system .... the following post has a much more detailed discussion of OTP
https://www.garlic.com/~lynn/2003o.html#46
and pointers to several posts on a straightforward eavesdropping/MITM attack against OTP ... note that OTP is justified as being a solution for passwords in the presence of eavesdropping; if you preclude eavesdropping, then you also preclude the basic justification for having OTP (as opposed to some other solution).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Order of Encryption and Authentication

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Order of Encryption and Authentication
Newsgroups: sci.crypt
Date: Tue, 16 Dec 2003 03:23:56 GMT
nobody writes:
Referring to the paper at:


http://palms.ee.princeton.edu/fiskiran/repository/CRYPTO/krawcyzk01order.pdf

This is a discussion of the order of encryption and message authentication; the conclusion presented by the author is that encryption should be done before authentication. That is, encrypt the message then send the ciphertext + mac of the ciphertext:


within the context of security taxonomy PAIN:
• privacy
• authentication
• integrity
• non-repudiation

... privacy is encryption, authentication is authentication of the sender (not authentication of the message), and integrity is that the message wasn't modified (i.e. validating its integrity).

things like MAC imply "authentication" in the context of validating the integrity of the message ... and not authentication of the sender of the message.

digital signature combines authentication (of the sender) and (validating of) integrity into a single operation i.e. extract SHA-1 from the digital signature using the sender's public key ... and then check the extracted SHA-1 against the currently calculated SHA-1 for the message ... if they match then it was both transmitted by the appropriate sender AND wasn't modified in transit.
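
a rough python sketch of that single sign/verify operation ... assuming the pyca/cryptography package (my choice, not anything from the paper); the key, message, and the SHA-1 / PKCS#1 v1.5 choices are only for illustration (SHA-1 matches the wording above; current practice would use a stronger hash):

# pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.exceptions import InvalidSignature

message = b"wire 100 to account 42"

# sender: sign the message (the library hashes the message with SHA-1,
# then operates on that hash with the private key)
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())

# receiver: one verify call checks both properties at once ...
# the sender (only the private-key holder could produce the signature)
# and the integrity (any modification changes the hash and fails)
public_key = private_key.public_key()
try:
    public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA1())
    print("signed by the key holder and unmodified")
except InvalidSignature:
    print("authentication/integrity check failed")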

it can create some amount of confusion if the word authentication is used to connote both authentication of the message and authentication of the sender.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Danger: Derrida at work

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Danger: Derrida at work
Newsgroups: comp.programming,alt.folklore.computers,comp.software-eng
Date: Tue, 16 Dec 2003 14:40:16 GMT
"J.Clarke" writes:
FWIW, I took an abstract algebra course in 1974 that was taught by a very sharp lady with a PhD that she had held for more than 20 years. She was not the only female professor in that math department. Later on, in 1977, I took a graduate level complex variables course from another very sharp (also very pretty and very pregnant) woman who was a good deal younger and had thus held her PhD for a shorter time.

Still, comparatively speaking, women in math and the sciences were a distinct minority at that time. Later on in industry the company I worked for had just recently hired their first group of female engineers--they weren't the first female engineers that the company had hired--there had been one earlier but she had long since retired--but they were the first that had been hired in quantity. The old guys weren't quite sure what to make of them.


mid-90s, we were doing some consulting for the census department getting ready for 2000. they were going to have a review/audit by some gov. person that specialized in very large databases ... so we were brought in to handle it all on the census side.

sort of towards the end of the day, during a coffee break ... there was some kibitzing about backgrounds and he mentioned something about graduating from u.of.mich; my wife said so did she, and asked what department? The reply was the engineering graduate school, and my wife said so was she. My wife asked what years ... and he replied. my wife said so was she, and that she was the only female in the engineering graduate school at that time. He replied, no she wasn't, somebody else was. My wife said that was her. He looked at her and said you've gotten older ... so my wife said so had he.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

packetloss bad for sliding window protocol ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: packetloss bad for sliding window protocol ?
Newsgroups: comp.protocols.tcp-ip
Date: Wed, 17 Dec 2003 04:21:06 GMT
robertwessel2@yahoo.com (Robert Wessel) writes:
This description is misleading at best. In a sliding window protocol the acks flow back to the sender in parallel to the data packets. So long as the ack rate keeps up with the data packet rate (and there's no packet loss), the sender will never have to wait, except as forced by link throughput considerations.

note that there was a paper in sigcomm circa 1988, about the same time as the slow-start publication. i believe the paper showed that slow-start would never stabilize in the real-world internet because acks tended to bunch and several arrive at one time. one of the congestion problems that the windowing algorithm is supposed to address is characterized by back-to-back packet arrival at intermediate nodes.

supposedly the hope has been that the windowing algorithm would achieve a homogeneous, even distribution of ack arrivals back at the sender ... which would then result in an even distribution of packets being sent. one of the problems is that packet transmission and ack returns tend to exhibit totally different transit characteristics.

in theory, the intermediate nodes would like to control the inter-packet arrival interval ... which is somewhat related to packet transmission rate ... but is really directly related to inter-packet transmission interval.

because of ACK bunching characteristic in real world networks .... they are almost totally useless for controlling inter-packet transmission interval. ACKs are several levels of indirection away from inter-packet transmission interval.

One of the places acks & windows were intended for was low level link control, where the receiving side set the worst case resource availability for the sender. The sender would never attempt to transmit more than N packets w/o an ack ... because the receiving side had at most N buffers reserved for packet reception on that link.

One possible scenario that ack/window has trouble addressing is an intermediate node that is multiplexing the same set of buffers among multiple senders. That sharing of buffers could be such that there is less than a full single buffer per sender .... which would require a fraction of a single packet sliding window (even when an ACK came back, a delay in transmission would still be required before the next packet should go out).

Another view: the final destination has most of the control over deciding when to transmit an ACK; ack/windows is somewhat related to a world of directly connected senders and receivers, which is not very representative of the current store&forward networking infrastructure.

the claim is that in a non-stable ack environment (real world networks) ... the only really effective way to control the inter-packet transmission interval (related to things like congestion control) is to directly control the inter-packet transmission interval.
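
a minimal python sketch of pacing on the clock rather than on ack arrivals (function and variable names are made up):

import time

def paced_send(packets, interval_s, send):
    """Send packets at a fixed inter-packet interval, independent of
    when (or how bunched) any acks come back."""
    next_send = time.monotonic()
    for pkt in packets:
        now = time.monotonic()
        if now < next_send:
            time.sleep(next_send - now)
        send(pkt)
        next_send += interval_s          # pace on the clock, not on acks

sent_at = []
paced_send(range(5), 0.01, lambda p: sent_at.append(time.monotonic()))
gaps = [round(b - a, 3) for a, b in zip(sent_at, sent_at[1:])]
print(gaps)   # roughly 0.01s apart, regardless of how acks might bunch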

a couple past threads regarding direct control of inter-packet transmission interval:
https://www.garlic.com/~lynn/2002.html#38 Buffer overflow
https://www.garlic.com/~lynn/2003j.html#46 Fast TCP
https://www.garlic.com/~lynn/2003k.html#57 Window field in TCP header goes small

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

64 bits vs non-coherent MPP was: Re: Itanium strikes again

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 64 bits vs non-coherent MPP was: Re: Itanium strikes again
Newsgroups: comp.arch
Date: Thu, 18 Dec 2003 14:39:37 GMT
Robert Myers writes:
The applications of 30 years ago anticipated limited available memory and thus were broken up into small, carefully designed blocks called overlays that were designed to be swapped in and out.

If there is any corresponding practice in widespread use, I am unaware of it. No application developer will benefit particularly from worrying that more than one program might be occupying memory, and thus, they don't.


tss/360 didn't ... it assumed flat virtual memory and so laid out (what had been) traditionally compact real-storage implementations and relied on individual (4k) page faulting ... with little or no hints/assists above or below the interface (like no prefetching when sequential reading was recognized). the synchronous 4k page faulting would drastically increase elapsed time ... compared to any paradigm that packaged somewhat for expected common real storage sizes and had much better transitions between phases.

another type of problem was the porting of apl\360 to cms\apl. apl\360 nominally had small workspaces, typically 16kbytes to 32kbytes, which were effectively swapped in total. the apl\360 storage allocation was, on every assignment, to (re)allocate the next available (higher) storage location and mark any former locations garbage. When allocation reached the top of the workspace ... it would do garbage collection and compact all allocated storage to the lowest possible addresses. That worked ok in the somewhat real-storage apl\360 environment ... but was disastrous in the cms\apl environment where the workspace appeared to be all of the virtual address space (which was typically larger than real storage).
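
a toy python sketch (sizes and names are made up) of that allocation behavior:

class AplishWorkspace:
    """Toy model of the allocator described above: every assignment
    takes the next higher free location and the variable's previous
    copy becomes garbage; hitting the top of the workspace triggers a
    compaction of live values down to the lowest addresses."""
    def __init__(self, size):
        self.size = size
        self.top = 0                      # next free (highest used) address
        self.vars = {}                    # name -> (addr, length) of live copy

    def assign(self, name, length):
        if self.top + length > self.size:
            self._compact()
        addr, self.top = self.top, self.top + length
        self.vars[name] = (addr, length)  # old copy (if any) is now garbage
        return addr

    def _compact(self):
        addr = 0
        for name, (_, length) in sorted(self.vars.items()):
            self.vars[name] = (addr, length)
            addr += length
        self.top = addr                   # allocation restarts just above live data

ws = AplishWorkspace(size=32 * 1024)
trace = [ws.assign("x", 256) for _ in range(500)]
# the sequence of returned addresses climbs steadily to the top of the
# workspace, drops at each compaction, then climbs again ... in a virtual
# memory environment every such cycle touches every page of the workspace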

space/time memory access plots .... virtual storage addresses on the vertical, time on the horizontal ... printed on the backside of green-bar wide paper in six foot lengths and assembled along a corridor wall ... showed a saw-tooth effect for the original apl\360 port to virtual memory ... storage location use showed a very rapid rise to the top of virtual memory and then a solid line dropping back down (garbage collection), followed again by a very rapid rise.

about the same time that the apl\360 port was going on, a virtual memory use analysis tool was being developed ... which was used for generating the mentioned access plots. The tool was used in helping validate a rewrite of the apl storage management algorithm for operation in virtual memory environments.

features developed in the tool eventually supported things like giving it a storage location load map of an application running in virtual memory ... where it would attempt to optimally re-order individual module placement in the application to minimize page faults. after a couple of years, this was released as a product in 1976. For relatively decently modularized large applications ... it could show some decent improvement (in scenarios where the total number of distinct virtual pages needed by the application typically exceeded available real storage) ... aka it tended to produce more compact (but more tightly bound) working sets.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

packetloss bad for sliding window protocol ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: packetloss bad for sliding window protocol ?
Newsgroups: comp.protocols.tcp-ip
Date: Thu, 18 Dec 2003 16:29:42 GMT
noselasd@frisurf.no (Nils O. Selåsdal) writes:
The real world does not work like that. ;-)

Please also read RFC 2960 , a rather nice and robust protocol.


the latest RFC in the genre is 3649, published a couple of days ago.
3649 E
HighSpeed TCP for Large Congestion Windows, Floyd S., 2003/12/12 (34pp) (.txt=79801) (was draft-ietf-tsvwg-highspeed-01.txt)


as always ... you can go to
https://www.garlic.com/~lynn/rfcietff.htm

the lower frame is currently 3601-3900 so it is possible to scroll the lower frame until the above entry appears. as always, clicking on the ".txt=" field retrieves the actual RFC.

also in the main frame, it is possible to select Term (term->RFC#) from the RFCs listed by section and scroll down to "congestion", i.e.
congestion
see also performance
3649 3540 3522 3520 3517 3496 3477 3476 3474 3473 3468 3465 3451 3450 3448 3436 3390 3309 3210 3209 3182 3181 3175 3168 3159 3124 3097 3042 2997 2996 2988 2961 2960 2914 2889 2884 2872 2861 2816 2814 2753 2752 2751 2750 2749 2747 2746 2745 2582 2581 2556 2490 2481 2414 2382 2380 2379 2309 2210 2209 2208 2207 2206 2205 2140 2098 2001 1859 1372 1254 1110 1106 1080 1018 1016 896 813 449 442 210 59 19


clicking on any RFC number brings up that RFC summary in the lower frame, clicking on the ".txt=" field retrieves the actual RFC.

It is also possible to scroll to entry for "Transport Area" (i.e. RFCs put out by the Transport Area working group).
Transport Area
see also congestion
3649 3522 3517 3448 3436 3390 3309 3168 3042 2988 2414


a couple from above:
3448 PS
TCP Friendly Rate Control (TFRC): Protocol Specification, Floyd S., Handley M., Padhye J., Widmer J., 2003/01/27 (24pp) (.txt=52657)
3390 PS
Increasing TCP's Initial Window, Allman M., Floyd S., Partridge C., 2002/09/31 (15pp) (.txt=36177) (Obsoletes 2414) (Updates 2581)
3168 PS
The Addition of Explicit Congestion Notification (ECN) to IP, Black D., Floyd S., Ramakrishnan K., 2001/09/14 (63pp) (.txt=170966) (Obsoletes 2481) (Updates 793, 2401, 2474)


random past rate-based related threads:
https://www.garlic.com/~lynn/2000b.html#11 "Mainframe" Usage
https://www.garlic.com/~lynn/2001h.html#44 Wired News :The Grid: The Next-Gen Internet?
https://www.garlic.com/~lynn/2002.html#38 Buffer overflow
https://www.garlic.com/~lynn/2002i.html#45 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#57 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002k.html#56 Moore law
https://www.garlic.com/~lynn/2002p.html#28 Western Union data communications?
https://www.garlic.com/~lynn/2002p.html#31 Western Union data communications?
https://www.garlic.com/~lynn/2003.html#55 Cluster and I/O Interconnect: Infiniband, PCI-Express, Gibat
https://www.garlic.com/~lynn/2003.html#59 Cluster and I/O Interconnect: Infiniband, PCI-Express, Gibat
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2003g.html#54 Rewrite TCP/IP
https://www.garlic.com/~lynn/2003g.html#64 UT200 (CDC RJE) Software for TOPS-10?
https://www.garlic.com/~lynn/2003j.html#1 FAST - Shame On You Caltech!!!
https://www.garlic.com/~lynn/2003j.html#19 tcp time out for idle sessions
https://www.garlic.com/~lynn/2003j.html#46 Fast TCP
https://www.garlic.com/~lynn/2003p.html#13 packetloss bad for sliding window protocol?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Star Trek, time travel, and such

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Star Trek, time travel, and such
Newsgroups: alt.folklore.computers
Date: Fri, 19 Dec 2003 16:02:47 GMT
hawk@slytherin.ds.psu.edu (Dr. Richard E. Hawkins) writes:
I've encountered very few worthwhile time travel tails. Most suffer from serious design problems, in which there's nothing really at stake, as another try coculd start a second earlier, or that if the "timeline" can be changed--but with some people from the other still present--*both* timelines probably still exist, and the problem isn't solved, and so forth. In many tales, ther is inexplicable time pressure at both spots in time, even though people can jump freely . . .

_Guns of the South_ dealt with it by having a fixed jump in the machine, which kept actions in the two timelines in synch (and was really just a way of creating the alternate history anyway). A novella I read had a time traveller researching the Odin mythology, and actually causing it, which was interesting. And a couple of others, but *very* few.


... any reasonably unrestricted precision time travel ... sort of implies that you could have an infinite number of tries to get it the way you want it to ... most plots have people achieving things thru some large number of interactions ... unrestricted precision time travel could just about do away with story plots.

the other technology is the transporter. i've repeatedly claimed that the scanning technology is significantly simpler than the decomposition and recomposition technology ... and therefore scanning technology is likely to have been available scores of years earlier. an application of such scanning technology is to just have people sit in near stasis in some sort of cocoon with the scanning technology being used. you do away with the bridge, talking, verbal interactions, commands, etc. sort of a variation on the matrix ... but not needing the late '90s virtual reality. again, little or no plot & action that people in current society are likely to relate to.

misc refs to past transporter threads (it is deja vu, we are having the same conversation)
https://www.garlic.com/~lynn/2002q.html#6 Sci Fi again was: THIS WEEKEND: VINTAGE
https://www.garlic.com/~lynn/2002q.html#8 Sci Fi again was: THIS WEEKEND: VINTAGE

misc refs: to past time machine/travel threads
https://www.garlic.com/~lynn/97.html#1 Year 2000: Why?
https://www.garlic.com/~lynn/2002n.html#55 ibm time machine in new york times?
https://www.garlic.com/~lynn/2002q.html#38 ibm time machine in new york times?
https://www.garlic.com/~lynn/2002q.html#40 ibm time machine in new york times?
https://www.garlic.com/~lynn/2002q.html#41 ibm time machine in new york times?
https://www.garlic.com/~lynn/2002q.html#42 ibm time machine in new york times?
https://www.garlic.com/~lynn/2002q.html#45 ibm time machine in new york times?
https://www.garlic.com/~lynn/2002q.html#46 ibm time machine in new york times?
https://www.garlic.com/~lynn/2003b.html#3 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003c.html#20 difference between itanium and alpha
https://www.garlic.com/~lynn/2003h.html#9 Why did TCP become popular ?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Does OTP need authentication?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Does OTP need authentication?
Newsgroups: sci.crypt
Date: Fri, 19 Dec 2003 16:44:20 GMT
Michael Amling writes:
What term, if not message authentication, do we use for the process of verifying that some message is actually from Aunt Maude? If we know Aunt Maude's public key and we know that Aunt Maude is careful with her private key, and that ... a few other things ..., we can verify messages as coming from Aunt Maude. A good definition here would help us to focus on those aspects that make message authentication (as I see it) possible. When the MAC key or public key is found floating in a bird bath, there's no way to know that anyone has been careful with anything, it's impossible to verify that messages came from anyone or anywhere, and hence there can hardly be said to be "data origin authentication".

a security taxonomy is PAIN

• privacy (aka encryption)
• authentication
• integrity
• non-repudiation

an authentication taxonomy is (one or more) 3-factor

something you have
something you know
something you are

authentication doesn't require identification .... aka the stereotypical swiss bank account, or the torn dollar bill sometimes seen on TV programs with respect to something like an anonymous informant line, or anybody from a collection of people that happens to know the correct passphrase (Klinger walking patrol on MASH).

one of the claims about x.509 identity digital certificates from the early 90s never catching on was the enormous privacy and liability problems with having publicly presented identity information. some number of the PKI trials in the mid-90s had gone to relying-party-only certificates ... containing possibly no more than some account number and a public key (although still possibly 4k bytes in size or larger) ... just to get around the enormous privacy and liability problems associated with identity certificates.
https://www.garlic.com/~lynn/subpubkey.html#rpo

non-repudiation can start to get into the area of not only proving who originated it ... but possibly also proving that they intended to originate it and also possibly that they are in agreement with the T&Cs outlined in the contents ... aka a legal signature ... as opposed to a digital signature (there is sometimes some semantic ambiguity because both terms contain the word signature).

typically a digital signature is an integrity and authentication technology ... but not necessarily an identification technology ... aka one can infer one, two (or possibly three) factor authentication from the digital signature; the entity originating the signature demonstrated something you have (say a hardware token) and/or something you know (pin/password).

there is also sometimes confusion over "auth" systems. "auth" can refer to either authentication and/or authorization ... even though they are very distinct business processes. however, sometimes both kinds of "auth" business processes are collapsed into the same system ... where authentication is a pre-requisite to authorization.

some past threads regarding authentication vis-a-vis identification:
https://www.garlic.com/~lynn/aepay11.htm#53 Authentication white paper
https://www.garlic.com/~lynn/aepay11.htm#58 PKI's not working
https://www.garlic.com/~lynn/aepay11.htm#60 PKI's not working
https://www.garlic.com/~lynn/aepay11.htm#66 Confusing Authentication and Identiification?
https://www.garlic.com/~lynn/aepay11.htm#68 Confusing Authentication and Identiification?
https://www.garlic.com/~lynn/aepay11.htm#72 Account Numbers. Was: Confusing Authentication and Identiification? (addenda)
https://www.garlic.com/~lynn/aepay11.htm#73 Account Numbers. Was: Confusing Authentication and Identiification? (addenda)
https://www.garlic.com/~lynn/aepay12.htm#0 Four Corner model. Was: Confusing Authentication and Identification? (addenda)
https://www.garlic.com/~lynn/aepay12.htm#1 Confusing business process, payment, authentication and identification
https://www.garlic.com/~lynn/aepay12.htm#2 Confusing business process, payment, authentication and identification
https://www.garlic.com/~lynn/aepay12.htm#3 Confusing business process, payment, authentication and identification
https://www.garlic.com/~lynn/aepay12.htm#4 Confusing business process, payment, authentication and identification

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

The BASIC Variations

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The BASIC Variations
Newsgroups: comp.lang.basic.misc,alt.lang.basic,alt.folklore.computers
Date: Sat, 20 Dec 2003 14:55:33 GMT
Nick Spalding writes:
When I joined IBM as a CE in 1963 I heard of but never saw such a booklet. The cynical view in the CE community was that most of the now forbidden practices were the personal invention of old man Watson.

my impression was that the booklet was somehow associated with gov. litigation ... the corporation has to show that it is taking steps to educate/remind employees that such activities are not tolerated. while the gov. may sue a company for engaging in various practices .... a company is an abstraction ... the abstraction doesn't actually do anything ... it is the members of the corporation that do things. If an individual does something ... it isn't sufficient that the corporation proves that it never instructed the individual to do it ... it must also prove that it specifically told the individual not to do it.

Corporations are now also being held responsible for other kinds of individual behavior ... and as a result corporations need to prove that they have taken steps to instruct individuals that such behavior is not tolerated.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

socks & color

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: socks & color
Newsgroups: alt.folklore.computers
Date: Sat, 20 Dec 2003 17:08:34 GMT
hawk@slytherin.ds.psu.edu (Dr. Richard E. Hawkins) writes:
I have yet to see anything but the occasiona grey that's long enough to be comfortable in boots--the top of the boots, particualrly the handles, rub funny against bare leg.

two dozen white crew length ... and a dozen white calf length. most boots aren't too bad, but i've had one pair recently that seems to use particularly stiff plastic thread for attaching the boot pulls.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Dumb anti-MITM hacks / CAPTCHA application

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Dumb anti-MITM hacks / CAPTCHA application
Newsgroups: sci.crypt
Date: Sun, 21 Dec 2003 15:17:08 GMT
Paul Rubin <http://phr.cx@NOSPAM.invalid> writes:
Alice and Bob, two random strangers, discover each other through an online personals ad and want to have a secure phone conversation or online chat.

missing some pieces ... how does ivan know that bob is the person that alice discovered in the personals ad ... or how does ivan know that alice is the person that bob discovered in the personals ad (regardless of whether or not bob & alice trust ivan for anything).

much simpler to just include your public key in the personals ad ... then there is a direct relationship between the personals ad and the entity in the personals ad ... aka the people running the personals ad can attest to the fact that the person that took out the ad is the person that has the public key (with the caveat how does anybody trust personal ads in the first place).

this is the authentication scenario.

it is along the lines of various threat models against the whole SSL domain name server certification process ... which goes something like:

the ssl domain name server certification authority industry is somewhat backing a proposal that when people register a domain name, they also register a public key. currently when someone asks for an ssl domain name server certificate ... they have to provide a bunch of identification information ... which the CA then tries to match against a bunch of identification registered with the domain name registration authority ... to make sure that the person requesting the certificate is the same as the person that owns the domain. this identity matching process can be a fairly expensive, time-consuming, and error prone process.

so the proposal is that when you register your domain name, you also register a public key. then when a CA gets a request for a SSL domain name certificate ... they just require that it be digitally signed; the CA then retrieves the associated public key from the domain name registration authority and performs a straight-forward authentication operation on the digital signature (as opposed to having to retrieve a bunch of identification information from the domain name registration authority and perform the expensive, time-consuming, and error-prone identity matching process).

such a solution does present something of a catch-22 for the SSL domain name certificate industry ... in theory if the certification authority can retrieve the correct domain name owner's public key from the domain name registration authority ... then so can everybody else in the world; meaning that if I can get the correct public key directly from the domain name registration authority .... why is a 3rd party domain name certificate needed to certify somebody's public key as correct.
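purely to illustrate the flow (a minimal python sketch using the pyca/cryptography package; the registry dictionary and function names are invented for the example and aren't any actual registrar or CA interface):

from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

registry = {}   # stands in for the domain name registration authority

def register_domain(name, public_key):
    registry[name] = public_key       # public key registered along with the domain

def ca_checks_request(name, request, signature):
    # straight-forward authentication: verify the digital signature against
    # the public key on file with the registration authority
    try:
        registry[name].verify(signature, request)
        return True
    except (KeyError, InvalidSignature):
        return False

owner = ed25519.Ed25519PrivateKey.generate()
register_domain("example.com", owner.public_key())
req = b"issue an SSL domain name certificate for example.com"
print(ca_checks_request("example.com", req, owner.sign(req)))   # True
print(ca_checks_request("example.com", req, b"\x00" * 64))      # False

note that the verify step uses nothing the CA has that the rest of the world doesn't also have ... which is exactly the catch-22 above.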

in much the same way ... the person placing a personal ad can register their public key directly with the ad. If their public key is part of their ad ... then to the extent somebody is basing something on the ad ... that person can be assured of talking to the entity that placed the ad.

lots of past discussions about the SSL domain name certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Sun researchers: Computers do bad math ;)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Sun researchers: Computers do bad math ;)
Newsgroups: comp.arch
Date: Sun, 21 Dec 2003 18:06:34 GMT
"del cecchi" writes:
I don't think of a chip as a do or die scenario. And building the chip or protein will reveal errors. Just way fewer and easier to find ones. Here is an example for you programmer folks. Imagine writing one of those big programs, takes 30 or 40 people a year or two. million lines of code more or less. You can do whatever you want, but to actually compile and run it will cost 1 million USD. Oh and it takes 4 months to compile and load, probably 6 months total until you find out it works. If you want to change something it costs between .5 and 1 million dollars and takes 2 to 4 months, depending on exactly what the change is.

How would that change your development methodology and tools?


one place is anything human-rated .... following is something I had dredged up from 20 years ago (posted by somebody else) with regard to dates (slightly related to a y2k thread) ... however, point three mentions a minor software fix:
https://www.garlic.com/~lynn/99.html#24 BA Solves Y2K (Was: Re: Chinese Solve Y2K)

Date: 7 December 1984, 14:35:02 CST

3. We have an interesting calendar problem in Houston. The Shuttle Orbiter carries a box called an MTU (Master Timing Unit). The MTU gives yyyyddd for the date. That's ok, but it runs out to ddd=400 before it rolls over. Mainly to keep the ongoing orbit calculations smooth. Our simulator (hardware part) handles a date out to ddd=999. Our simulator (software part) handles a date out to ddd=399. What we need to do, I guess, is not ever have any 5-week long missions that start on New Year's Eve. I wrote a requirements change once to try to straighten this out, but chickened out when I started getting odd looks and snickers (and enormous cost estimates).


... snip ... top of post, old email index
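for the calendar arithmetic flavor of the above (a python sketch with made-up values, not anything from the actual MTU or simulator code), a yyyyddd value where ddd is allowed to run past the end of the year just keeps counting into the next year:

from datetime import date, timedelta

def mtu_to_date(yyyyddd):
    # hypothetical decoder: ddd keeps counting past the end of the year
    # (the MTU runs out to ddd=400 before it rolls over)
    year, ddd = divmod(yyyyddd, 1000)
    return date(year, 1, 1) + timedelta(days=ddd - 1)

print(mtu_to_date(1984366))   # 1984-12-31 (1984 was a leap year)
print(mtu_to_date(1984400))   # 1985-02-03 ... five weeks into a mission started new year's eve
# a decoder limited to ddd=399 (the simulator software) breaks long before
# the hardware limit of ddd=999 would ever matter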

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

1960s images of IBM 360 mainframes

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 1960s images of IBM 360 mainframes
Newsgroups: alt.folklore.computers
Date: Mon, 22 Dec 2003 05:43:56 GMT
cjs2895@hotmail.com (cjs2895) writes:
I'm looking for large 1960s images of IBM 360 mainframes that I can use for my desktop background. Color or B/W. I've Googled for a couple hours and the best I've been able to find is some System 360 marketing materials that someone scanned (and even touched up!). Everything else I've come across tends to be small and low resolution.

you may have sparked something with the geocities url ... since it is saying that it has temporarily exceeded its transfer limit

are the newcastle ones too small?
https://web.archive.org/web/20031121232747/www.cs.ncl.ac.uk/events/anniversaries/40th/webbook/photos/index.html
this seems to be reasonably large size:
https://web.archive.org/web/20051228182935/www.cs.ncl.ac.uk/events/anniversaries/40th/images/ibm360_672/slide07.jpg

these are probably marketing material scans at columbia
http://www.columbia.edu/cu/computinghistory/datacell.html
http://www.columbia.edu/cu/computinghistory/2311.html

there are some misc. here:
http://www.nfrpartners.com/comphistory/

and a couple 360 front panel pictures here
http://www.punch-card.co.uk/toppage1.htm

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

1960s images of IBM 360 mainframes

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 1960s images of IBM 360 mainframes
Newsgroups: alt.folklore.computers
Date: Wed, 24 Dec 2003 02:34:00 GMT
Brian Inglis writes:
On 370 systems, the CICS dumps were a few MB of partition, and VSE or VM OS dumps were usually 16MB of memory in the same format. ISTR CICS (and probably VM) had a trace table as part of the dump (or somewhere in the dump), so you started with the last entry and worked your way back from there to find the cause of the dump, and then further back to the origin of the problem.

introduced in cp/67 as part of fast reboot ... was writing the dump to disk ... instead of to the printer (and then automagically rebooting) ... from van vleck's home page
https://www.multicians.org/thvv/
includes story about cp/67 crashing 27 times in same day:
https://www.multicians.org/thvv/360-67.html
the comparison with multics somewhat gave rise to the development of the "new storage system" ... see rest of story in above.

i had done the tty support for cp/67 when an undergraduate and ibm had shipped it in the product. I had implemented it using one-byte arithmetic under the assumption that tty line lengths were never more than 255 (actually never more than 80). In the above, I believe somebody had done something like a mod to cp/67 for supporting an ascii plotter or graphics device with long lines (1200 bytes?) ... and the length of transferred data calculations got messed up.
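roughly the effect (a trivial python sketch, not the original cp/67 code):

# one-byte length arithmetic: fine for the assumed maximum, silently wrong past it
def one_byte_length(n):
    return n & 0xFF            # the length was kept in a single byte

print(one_byte_length(80))     # 80  -- a normal tty line
print(one_byte_length(1200))   # 176 -- a 1200-byte plotter transfer, silently truncated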

in any case, trying to do some fancy stuff with the 2702 controller for tty & 2741 and finding out it couldn't be done with the 2702 somewhat led to the project to build a pcm controller to replace the 2702
https://www.garlic.com/~lynn/submain.html#360pcm
and getting blamed for helping start the ibm pcm controller industry

later when playing with rex (now called rexx) ... i made some assertion that i could write a replacement for the VM/370 dump reader (a large, all assembler application supported by an organization in endicott) in rex that ran ten times faster (rex was/is an interpreted language) with ten times more function in less than 3 months elapsed time ... while only working on it half time
https://www.garlic.com/~lynn/submain.html#dumprx

and for some additional drift, some old rexx postings:
https://www.garlic.com/~lynn/2000b.html#29 20th March 2000
https://www.garlic.com/~lynn/2000b.html#30 20th March 2000
https://www.garlic.com/~lynn/2000b.html#31 20th March 2000
https://www.garlic.com/~lynn/2000b.html#32 20th March 2000
https://www.garlic.com/~lynn/2000b.html#33 20th March 2000
https://www.garlic.com/~lynn/2002g.html#57 Amiga Rexx
https://www.garlic.com/~lynn/2002g.html#58 Amiga Rexx
https://www.garlic.com/~lynn/2002g.html#59 Amiga Rexx
https://www.garlic.com/~lynn/2002g.html#60 Amiga Rexx

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Mainframe Training

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Training
Newsgroups: bit.listserv.ibm-main
Date: Wed, 24 Dec 2003 04:26:52 GMT
charlie@ibm-main.elektro.cmhnet.org (Charlie Smith) writes:
Going back and reading this whole thread makes me just slowly shake my head - over the newcomers in this field that think IBM is going to change. Since I started working with their stuff almost 40 years ago, IBM has been a marketing driven company.

there was a side-jog during FS days .... about which i somewhat snidely made the analogy to a cult movie that had been playing forever down in central sq about the inmates being in charge of the institution (ww2, allies enter a french town that had been vacated and was currently populated by inmates from the local asylum). with the canceling of FS ... there was a strong pendulum swing back from the heavy technology side. some of my past references to FS
https://www.garlic.com/~lynn/submain.html#futuresys
a little bit here ... but mostly url for the above:
https://people.computing.clemson.edu/~mark/fs.html

note with regard to the above, FS was way more than single-level store .... SLS had somewhat been implemented in tss/360 and did poorly.

specific extract from fs posting that has a pointer to another reference/view on fs:
https://www.garlic.com/~lynn/2000f.html#16 [OT] FS = IBM Future System

to some extent the extract in the above reference is to PCM controller stuff ... which i've been blamed for helping originate:
https://www.garlic.com/~lynn/submain.html#360pcm

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Mainframe Training

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Training
Newsgroups: bit.listserv.ibm-main
Date: Wed, 24 Dec 2003 04:48:33 GMT
Anne & Lynn Wheeler writes:

https://www.garlic.com/~lynn/2000f.html#17 [OT] FS = IBM Future System


oops that should have been
https://www.garlic.com/~lynn/2000f.html#16 [OT] FS = IBM Future System

i.e. "16", not "17". it references and has extracts from a (PDF) paper "The rise and fall of IBM" which can be found at:
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Sun researchers: Computers do bad math ;)

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Sun researchers: Computers do bad math ;)
Newsgroups: comp.arch
Date: Fri, 26 Dec 2003 18:01:33 GMT
Robert Myers writes:
When you are approaching problems you _don't_ understand and where you have no tools, you are sometimes reduced to random experimentation. When you have to make something work, and you know that Murphy's law rules, you use redundancy.

Numerical modeling of physical processes is not a field where we should be reduced to random experimentation.

What Jonah Thomas suggested has something in common with extreme programming, except that, in extreme programming, you gain synergy of insight and lose the presumed lack of correlation of random errors.


may or may not have anything to do with anything ... but when doing the resource manager ... after doing a whole lot of benchmarks and examining long term snapshot performance numbers from a large number of systems ... a parameterised synthetic workload, a bunch of automated benchmarking processes, an analytical APL model, and a model of nominal operating envelopes (for observed environments and workloads) were developed.

then predefined something like 1000 benchmarks that sort of covered all points along the edges of the operational envelope ... a large sampling of points within the operational envelope, and a lot of points well outside any observed operational environment. the benchmarks were automated and turned loose ... with the measured results being fed into the apl model ... in part to help calibrate its operation. after the first 1000 benchmarks or so ... another 1000 benchmarks were done with operational & workload points selected by the apl model ... looking for anomalies and/or other points of operational interest. the 2000 benchmark suite took something like 3 months elapsed time to run (with some amount of checking and updating along the way).
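to give a flavor of the point selection (a minimal python sketch, purely hypothetical ... the parameter names and ranges are invented, nothing here is the original APL model): edge/corner points of the operational envelope, a sampling of interior points, and points well outside anything observed:

import itertools, random

envelope = {"users": (5, 80), "working_set_mb": (1, 16), "io_rate": (10, 300)}

def corner_points():
    # every low/high combination -- the corners/edges of the envelope
    keys, ranges = zip(*envelope.items())
    for combo in itertools.product(*ranges):
        yield dict(zip(keys, combo))

def random_point(scale=1.0):
    # scale > 1 pushes a point well outside the observed envelope
    return {k: random.uniform(lo, hi * scale) for k, (lo, hi) in envelope.items()}

benchmarks = list(corner_points())                    # envelope corners
benchmarks += [random_point() for _ in range(20)]     # interior samples
benchmarks += [random_point(3.0) for _ in range(10)]  # well outside the envelope
# each point would then be run as a synthetic workload and the measured
# results fed back to calibrate the analytical model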

misc. past posts about the benchmarking
https://www.garlic.com/~lynn/submain.html#bench
misc. past post about the resource manager & related stuff
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

the resource manager announcement letter
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

The BASIC Variations

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The BASIC Variations
Newsgroups: alt.folklore.computers
Date: Fri, 26 Dec 2003 21:15:08 GMT
"Kelli Halliburton" writes:
Actually, it means that the organization chart is WRONG and should be rewritten to reflect the _de facto_ organization of the company.

Companies should be allowed to evolve to a steady productive state, then the documentation should reflect that state, rather than forcing the company to conform to arbitrarily-imposed documentation.


not all organizations are hierarchical ... and/or not all aspects or operational characteristics of an organization are hierarchical.

sometimes org charts are for the people that need the comfort of a hierarchical organization ... and possibly something of a test for people as to how an organization really operates.

one of boyd's observations was that a lot of US corporations' organizational structure during at least the 70s and 80s (and probably the material taught in MBA schools) was the result of young men that had been part of the US military organization during WW2 ... and were then "coming of age".

The scenario was that entering WW2, the US had very few experienced military personnel and had to greatly expand the numbers quickly. The solution was to quickly deploy a large number of inexperienced people in an extremely rigid, tightly controlled, top-down organization ... attempting to leverage the few experienced military personnel that were available. the people who had acquired their knowledge of how to run large organizations from this ww2 period ... were then moving into positions of major responsibility during the 70s and 80s (and attempting to replicate the extremely rigid, tightly controlled, top-down structure).

he contrasted it to the blitzkrieg where Guderian effectively wanted the person on the spot to make the decisions .... and is reputed to have asked for verbal orders only. This was supposedly to convey the idea ... that there wasn't going to be any evidence after the fact for the auditors to go around and try and blame somebody or another for making less than perfect decisions.

random past Guderian posts:
https://www.garlic.com/~lynn/99.html#120 atomic History
https://www.garlic.com/~lynn/2001.html#29 Review of Steve McConnell's AFTER THE GOLD RUSH
https://www.garlic.com/~lynn/2001.html#30 Review of Steve McConnell's AFTER THE GOLD RUSH
https://www.garlic.com/~lynn/2001m.html#16 mainframe question
https://www.garlic.com/~lynn/2002d.html#36 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002d.html#38 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002q.html#33 Star Trek: TNG reference
https://www.garlic.com/~lynn/2002q.html#43 Star Trek: TNG reference
https://www.garlic.com/~lynn/2003h.html#51 employee motivation & executive compensation

misc past boyd posts:
https://www.garlic.com/~lynn/subboyd.html#boyd

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

The BASIC Variations

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The BASIC Variations
Newsgroups: alt.folklore.computers
Date: Sat, 27 Dec 2003 16:00:35 GMT
Charles Shannon Hendrix writes:
Unfortunately, it seems this stuff is rarely used right, and a lot of people get all tied up in who is "highest", or has the most arrows pointing at him.

But certainly you *can* document an organization and its people, processes, and communications, and make it useful.


there is also sometimes a tools issue ....

if a hierarchical database is used for maintaining the information ... then an organization which uses the reporting/manager hierarchy for administrative purposes and not for goals & responsibilities ... may not be straight-forward to represent.

a relational database might have a single "reports to" column ... and it may be extremely difficult to actually represent all the (possibly non-uniform) interactions that really occur (to some extent, the design point for relational databases has been bank accounts with a single uniform account index and uniform related fields).

it may be possible to *document* anything. however, for some tools, it may be extremely difficult to implement a repository that captures all the information that has been documented ... and keeps it up to date.
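for a concrete (and entirely made-up) illustration of the relational point, a minimal python/sqlite sketch: a single "reports to" column captures the administrative hierarchy, and everything non-uniform ends up needing a separate table that somebody then has to keep up to date:

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE person (
    id INTEGER PRIMARY KEY,
    name TEXT,
    reports_to INTEGER REFERENCES person(id)   -- the single administrative hierarchy
);
CREATE TABLE works_with (
    a INTEGER, b INTEGER, kind TEXT            -- everything the org chart leaves out
);
""")
db.executemany("INSERT INTO person VALUES (?,?,?)",
               [(1, "dept mgr", None), (2, "lynn", 1), (3, "peer", 1)])
db.executemany("INSERT INTO works_with VALUES (?,?,?)",
               [(2, 3, "project"), (2, 1, "admin only")])
# the hierarchy answers "who reports to whom" ...
print(db.execute("SELECT name FROM person WHERE reports_to=1").fetchall())
# ... but the actual day-to-day relationships live in the second table
print(db.execute("SELECT kind FROM works_with WHERE a=2").fetchall())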

random past system/r refs:
https://www.garlic.com/~lynn/submain.html#systemr

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Sun researchers: Computers do bad math ;)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Sun researchers: Computers do bad math ;)
Newsgroups: comp.arch
Date: Sat, 27 Dec 2003 19:23:58 GMT
Robert Myers writes:
But you weren't simulating anything. You were testing and getting your results into a manageable form. Testing is a circumstance where we often *are* reduced to random experimentation.

If the hardware satisfied perfectly the idealization of a model, or if you were willing to accept the risk that it didn't, you could obtain hard bounding estimates for system performance, and random sampling of points wouldn't do it. How sharp an estimate you got for system performance would depend on the details of the model and on how hard you were willing to work.

The working assumption of engineers is that there will always be "corner cases" that you may not discover by random testing and that there is nothing you can do about that.

If you have a credible analytical model of a system, you shouldn't *have* to live with unknown corner cases. Whether it is worth it to you to have that certainty about system performance depends on who you are and what you are doing, but if you want results that bound performance *for certain* (to within the limits of the model idealization), standard floating point arithmetic won't cut it. the whole discussion seem less preposterous.


the apl analytical model was simulating lots of stuff .... it was being calibrated by the actual benchmarks. the last 1000 benchmarks were based on the analytical model selecting new points to test based on past observations and then checking to see if the measured benchmarks correlated with the predictions from the model (some random coverage ... but also looking for edge conditions).

the apl analytical model was also rolled into a sales support tool (called the performance predictor) on HONE (sales & field support world-wide ... the US HONE datacenter had on the order of 40,000 users defined ... at one time possibly the largest single system image operation around) ... misc. hone & apl refs:
https://www.garlic.com/~lynn/subtopic.html#hone

customers could extract certain configuration, workload, and performance characteristics from their actual operation and provide it to their sales support people ... who would load it onto the HONE system ... and then be able to ask what-if questions about changing workload &/or configuration. it was sales support ... in that the performance predictor could be used to help justify additional hardware sales.
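the flavor of such a what-if calculation (a toy python sketch with a textbook M/M/1 approximation and made-up numbers ... nothing from the actual performance predictor):

def response_time(arrival_rate, service_rate):
    # M/M/1 approximation: R = 1 / (mu - lambda), valid only while lambda < mu
    if arrival_rate >= service_rate:
        return float("inf")          # saturated ... time to talk about more hardware
    return 1.0 / (service_rate - arrival_rate)

current   = response_time(arrival_rate=80, service_rate=100)    # today's configuration
upgraded  = response_time(arrival_rate=80, service_rate=150)    # proposed upgrade
more_work = response_time(arrival_rate=120, service_rate=100)   # projected growth, same box
print(current, upgraded, more_work)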

Not A Survey Question

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Not A Survey Question
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 27 Dec 2003 20:03:15 GMT
edgould@ibm-main.ameritech.net (Edward A. Gould) writes:
Just curious as ro which release of MVS that has shipped has tthe biggest code change (from IBM).

And a followup question which release of MVS caused the biggest headaches to the end users? ie Most code that had to be recoded and or recompalations. (I will stay neutral on this one as I have an opinion).

This came up while I was defending the MF to a comp sci "kid". Of course the "kid" had no concepts of 100's of thousands of programs but it was an interesting (to me) discussion.


slight drift ... when Amdahl was giving a talk at mit in the early '70s regarding the founding of Amdahl computers ... he was asked something about the business case for getting funding to do an ibm clone.

the answer was something about (at that time) customers having something like $100b invested in ibm mainframe application software ... a significant portion of which would still be around for at least the next 30 years. the prediction possibly even included any knowledge he might have had about ibm's direction (at the time) to leave 360/370 and move to FS ... misc. FS refs:
https://www.garlic.com/~lynn/submain.html#futuresys

one might guess that the number is ten times larger now ... and possibly the value of the business dependent on that software is another order of magnitude larger (i.e. at least $10t).

370 market chronology
https://web.archive.org/web/20050207232931/http://www.isham-research.com/chrono.html

from above, Amdahl corp founded 10/70 ... so I guess the MIT talk was spring 1971 (FS was starting to gear up).

random past threads regarding the subject:
https://www.garlic.com/~lynn/2000c.html#44 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2001j.html#23 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2002j.html#20 MVS on Power (was Re: McKinley Cometh...)
https://www.garlic.com/~lynn/2003.html#36 mainframe
https://www.garlic.com/~lynn/2003e.html#13 unix
https://www.garlic.com/~lynn/2003e.html#15 unix
https://www.garlic.com/~lynn/2003e.html#20 unix
https://www.garlic.com/~lynn/2003g.html#58 40th Anniversary of IBM System/360
https://www.garlic.com/~lynn/2003h.html#32 IBM system 370
https://www.garlic.com/~lynn/2003i.html#3 A Dark Day

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

The BASIC Variations

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The BASIC Variations
Newsgroups: alt.folklore.computers
Date: Mon, 29 Dec 2003 17:46:47 GMT
"Judson McClendon" writes:
His point is valid. Not every job can be done better by a computer. I once wrote a cost tracking system for a local county government. In their garage the system kept up with every expenditure on each vehicle: gas, oil, parts, labor, etc. The garage manager had an index card file with the part numbers for the lubricants and expendable parts (e.g. spark plugs & filters) for that vehicle. When he asked if I could computerize it for him, I told him sure, but there was no way I could improve on what he already had, unless the data on the cards needed to be available at multiple locations (it didn't). He had 100% uptime and instant access, his system couldn't have been less expensive, and his access and update procedures were so simple that everyone in his shop understood it perfectly and intuitively. He also didn't have to depend on anyone else for it to work. :-)

early in my college years, I had a summer job as foreman on a relatively small remote construction job (30 workers). one of the things i got in the habit of doing was getting there an hr before everybody else and walking the job, doing inventory of everything in my head: rate of progress, use of materials, and projecting requirements for additional material (delivery had a 4-5 day lag, real emergencies might get something in 3 days). there were all sorts of adjustments in real time. the last six weeks were a little hectic because early in the project there had been lots of weather delays and it was coming down to missed deadline penalties ... so we were on an 84hr work week.

some computerized stuff could have helped with supply chain management and just in time delivery ... although i'm not sure about the accuracy of a computerized work model on this particular job and having it dynamically adjust for the change-over from a 40hr to an 84hr work week. along with that would have been issues about interface, information representation and the method of inputting stuff like rate of progress, usage of materials, etc. The last six weeks, the extra 13th hour every day was a real drag ... but it isn't clear that there is yet a computerized paradigm that would be more efficient (but possibly better than somebody that wasn't spending the extra hour every day).

originally the job schedule and deliveries had been worked out in some detail ... but a combination of the weather delays, unanticipated real-world problems, and then extended work week (trying to avoid deadline penalties) pretty much threw it completely out of whack.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Mainframe Emulation Solutions

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Emulation Solutions
Newsgroups: comp.arch
Date: Mon, 29 Dec 2003 18:16:07 GMT
Sander Vesik writes:
heh. maybe they are moving off a 10+ year old machine that was a bit underwhelmed with workload so seeing as IBM doesn't make small z/series machines they have no other option?

you know, the pc/370 and similar things never did really catch on...


the xt/at/pc/370 were targeted at the interactive market ... a lot of the interactive response on the real mainframes was because of shared caching of lots of stuff across a large number of users.

scaling the number of users down by a factor of 100, real storage down by a factor of 100, and disk speed by a factor of 10 .... resulted in available real storage being less than the working set of a typical mainframe application ... which tended to throw the system into constant page thrashing ... and having ten times slower disks severely aggravated the pain of the page thrashing situation in an interactive scenario (aka scaling down from a real mainframe to the pc370 resulted in some non-linear effects).

current generation of pc machines can have real storage sizes and disk performance comparable to mainframe configurations.

side comment about mainframe emulation ... the original 360 machines were mostly microcoded engines .... many of the machines saw an avg. of ten microcode instructions for every 360 instruction (i.e. the microcode engine had a mip rate ten times higher than the resulting 360 mip rate). this is not that different from various of the current generation of mainframe simulators running on intel platforms.
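the back-of-the-envelope arithmetic (a trivial python sketch with purely illustrative numbers):

host_mips = 50.0      # hypothetical native engine / intel box rate
expansion = 10        # host instructions executed per emulated 360 instruction
guest_mips = host_mips / expansion
print(f"emulated 360 rate: ~{guest_mips} MIPS")   # roughly a tenth of the host rate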

collection of misc. mainframe microcode threads:
https://www.garlic.com/~lynn/submain.html#360mcode

slightly related is recent comment about >30yr old business case for mainframe clone
https://www.garlic.com/~lynn/2003p.html#30 Not A Survey Question
there is huge amount of mainframe software (costing trillions of dollars to originally develop) that still satisfies business requirements (and can be cheaper to continue running than rewriting).

misc. past posts related to xt/at/pc/370:
https://www.garlic.com/~lynn/94.html#42 bloat
https://www.garlic.com/~lynn/94.html#46 Rethinking Virtual Memory
https://www.garlic.com/~lynn/96.html#23 Old IBM's
https://www.garlic.com/~lynn/99.html#120 atomic History
https://www.garlic.com/~lynn/2000b.html#16 How many Megaflops and when?
https://www.garlic.com/~lynn/2000b.html#56 South San Jose (was Tysons Corner, Virginia)
https://www.garlic.com/~lynn/2000b.html#70 Maximum Length of an URL
https://www.garlic.com/~lynn/2000d.html#39 Future hacks [was Re: RS/6000 ]
https://www.garlic.com/~lynn/2000e.html#52 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2000e.html#55 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2000g.html#2 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001d.html#43 Economic Factors on Automation
https://www.garlic.com/~lynn/2001e.html#49 Can I create my own SSL key?
https://www.garlic.com/~lynn/2001f.html#28 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001g.html#25 Root certificates
https://www.garlic.com/~lynn/2001g.html#34 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#37 Thread drift: Coyote Union (or Coyote Ugly?)
https://www.garlic.com/~lynn/2001g.html#59 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001i.html#9 Net banking, is it safe???
https://www.garlic.com/~lynn/2001i.html#19 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001i.html#32 IBM OS Timeline?
https://www.garlic.com/~lynn/2001k.html#24 HP Compaq merger, here we go again.
https://www.garlic.com/~lynn/2001n.html#49 PC/370
https://www.garlic.com/~lynn/2002b.html#38 "war-dialing" etymology?
https://www.garlic.com/~lynn/2002b.html#41 "war-dialing" etymology?
https://www.garlic.com/~lynn/2002b.html#45 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002d.html#4 IBM Mainframe at home
https://www.garlic.com/~lynn/2002f.html#49 Blade architectures
https://www.garlic.com/~lynn/2002f.html#50 Blade architectures
https://www.garlic.com/~lynn/2002f.html#52 Mainframes and "mini-computers"
https://www.garlic.com/~lynn/2002k.html#17 s/w was: How will current AI/robot stories play when AIs are
https://www.garlic.com/~lynn/2002l.html#27 End of Moore's law and how it can influence job market
https://www.garlic.com/~lynn/2002n.html#52 Computing on Demand ... was cpu metering
https://www.garlic.com/~lynn/2002q.html#52 Big Brother -- Re: National IDs
https://www.garlic.com/~lynn/2003.html#51 Top Gun
https://www.garlic.com/~lynn/2003e.html#50 MP cost effectiveness
https://www.garlic.com/~lynn/2003f.html#8 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#56 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003g.html#56 OT What movies have taught us about Computers
https://www.garlic.com/~lynn/2003h.html#40 IBM system 370
https://www.garlic.com/~lynn/2003i.html#23 TGV in the USA?
https://www.garlic.com/~lynn/2003j.html#55 Origin of "Function keys" question
https://www.garlic.com/~lynn/2003k.html#65 Share lunch/dinner?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

[IBM-MAIN] NY Times editorial on white collar jobs going

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [IBM-MAIN] NY Times editorial on white collar jobs going
overseas.
Newsgroups: bit.listserv.ibm-main
Date: Mon, 29 Dec 2003 18:44:01 GMT
jmaynard@ibm-main.conmicro.cx (Jay Maynard) writes:
The sad fact (and I say this knowing what it means for my prospects of employment) is that, like the American steelworker, the American IT worker is now too expensive to compete in the global market. We are going to have to accept that jobs are going to be neither as plentiful nor as well-paying as they once were. If we do not, we will reach the same end by simply not having the companies exist to work for.

there was a slightly different take. ten years or so ago ... the census published some numbers ... that something like 50 percent of US high-school-graduate-aged people were functionally illiterate ... and that even if the schools continued at the current level ... the complexity of the world was increasing, effectively raising the minimum level for functional literacy.

at about the same time, supposedly half of the phd graduates in technical areas from universities in the state of cal. ... were foreign born ... and doing recruiting ... just about the only 4.0s were foreign born. it was documented that some number of them had their education paid for by their home gov. and they were advised to get jobs in certain strategic areas for 5-8 years and then return home to do technology transfer.

about the same time, I remember reading an article in a Hong Kong paper giving the pros & cons of a province in mainland china going after world-wide high-tech outsourcing vis-a-vis similar efforts in india. A lot of the focus was on differences in fundamental infrastructure helping or hindering expanded high-tech business efforts. a lot of this was really sparked by the y2k resource requirement bubble (a temporary spike in requirement for resources that forged relationships which continued after the y2k effort had completed).

to a large extent this all was kicked off, not by lower costs ... but by scarcity of resources.

later in the '90s, it appeared that the internet boom could not have really occurred w/o those fifty percent of foreign born high-tech workers (filling fifty percent or more of the new jobs created by the internet boom).

slightly related past post on high-tech globalization
https://www.garlic.com/~lynn/2003c.html#65 Dijkstra on "The End of Computing Science"

note that there is cross-over between the thread on telecommuting and high-tech globalization ... many of the same technologies that enable telecommuting ... also make some kinds of work distance insensitive (little difference between it happening at home or happening on the other side of the world). a couple past posts about telecommuting technology being distance insensitive:
https://www.garlic.com/~lynn/94.html#33b High Speed Data Transport (HSDT)
https://www.garlic.com/~lynn/2000b.html#69 oddly portable machines
https://www.garlic.com/~lynn/2000d.html#59 Is Al Gore The Father of the Internet?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

The BASIC Variations

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The BASIC Variations
Newsgroups: alt.folklore.computers
Date: Mon, 29 Dec 2003 20:17:04 GMT
Nick Spalding writes:
I once met a man who ran a business supplying spare parts for heavy construction machinery. He had employed some chancer to set him up a system (using BLIS COBOL on a Nova 2) to handle his business who turned out to be incompetent and had been sent packing. He had picked up the pieces himself and had put together quite an impressive system. I asked him how he had handled stock control and he said that he had totally ignored it for the following reason.

"If I send someone in to count the number of widgets in stock he will come back and tell me a number. I send someone else and he comes back with a different number. I go in myself and count them and get a different answer again. What is the point of trying to computerise something like that?"


grocery stores have been using barcodes for stuff like that, however barcodes can have some problems working in a more open (especially outside) environment with all sorts of issues like dirt, grease, no real check-out stand, etc.

a lot of the RFID stuff appears to be moving towards an electronic barcode ... pushing a standard for a very large number space ... and not greatly affected by dirt, grease, etc.

the initial scenario is noting when inventory drops below a threshold and it is time to replenish ... the next scenario is noting the rate at which inventory is used ... and doing supply chain management and just in time delivery.
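both scenarios are a couple of lines of arithmetic (a minimal python sketch, illustrative numbers only):

def below_threshold(on_hand, threshold):
    return on_hand <= threshold                  # scenario 1: replenish when low

def reorder_point(daily_usage, lead_time_days, safety_stock=0):
    # scenario 2: reorder when remaining stock just covers the delivery lag
    return daily_usage * lead_time_days + safety_stock

print(below_threshold(on_hand=40, threshold=50))          # True -> time to reorder
print(reorder_point(daily_usage=12, lead_time_days=5))    # 60 widgets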

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

value of pi

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: value of pi
Newsgroups: bit.listserv.ibm-main
Date: Mon, 29 Dec 2003 23:02:21 GMT
lancev@ibm-main.listperfect.com (lance vaughn) writes:
Sorry, John, But I must disagree. The value of Pi is quite exact. If it were not, it would be useless mathematically.

You are confusing resolution with exactness. Strictly speaking, Pi is a transcendental function with an exact value but without resolution (that is, its value cannot be represented by a finite number of decimal digits).


pick a simpler example ... 1/3rd.
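the same point in a couple of lines of python (a minimal sketch): the value is exact, but no finite decimal (or binary float) representation of it is:

from fractions import Fraction
from decimal import Decimal, getcontext

exact = Fraction(1, 3)          # the value itself is exact
print(exact * 3 == 1)           # True -- exactness isn't the issue
getcontext().prec = 50
print(Decimal(1) / Decimal(3))  # 0.333...3 cut off at 50 digits -- resolution is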

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Mainframe Training

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Training
Newsgroups: bit.listserv.ibm-main
Date: Mon, 29 Dec 2003 23:06:57 GMT
john.mckown@ibm-main.uiciinsctr.com (McKown, John) writes:
I guess that is, in a sense, true of all of us. This has been true for me since I lost access to the console on the 1620. In most of my jobs, I've been "remote" from the computer, either on a different floor or building. The only difference is the reliability and speed of the connection. And the option to relieve stress by screaming at the programmers or managers, depending on who is bugging me <GRIN>.

i got a 2741 at home in march of '70 ... and have pretty much had one means or another for telecommuting from home since then.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

The BASIC Variations

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The BASIC Variations
Newsgroups: comp.lang.basic.misc,alt.lang.basic,alt.folklore.computers
Date: Tue, 30 Dec 2003 17:11:25 GMT
erewhon@nowhere.com (J French) writes:
Sounds interesting - but no context I am in : comp.lang.basic.misc,alt.lang.basic

I am aware that most PC/Micro innovations are just re-inventions of long established technology


is that like alt.beer.making?

there can be a multitude of issues for business continuity & disaster survivability. fault tolerance is frequently viewed at the micro-level, for continuing when there are failures ... it can also be fall-over and recovery at a higher, macro-level ... and there can be general principles regardless of the technology used.

one of the scenarios was this thing called the payment gateway in support of this stuff that was going to be called e-commerce. in the existing, circuit-based infrastructure, nominally, first level problem determination was doable by the call center in five minutes. one of the early payment gateway tests had a trouble call ... that after three hrs of manual investigation closed the trouble ticket as NTF (no trouble found).

this was a straight-forward application implementation and stress-testing. my assertion has been that to turn a typical application into a service offering takes 4-10 times more code. In this case, we went back to the drawing board and did about ten times as much work as the base application implementation (investigating all possible failure modes) and possibly four times as much code ... with the objective that the application (handling payment transactions) either had to 1) recover from all possible infrastructure faults or 2) provide sufficient information where first level problem determination could be achieved in five minutes.
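the shape of those two objectives (a minimal python sketch, purely hypothetical ... the function names, exception choices, and request format are invented, this is not the original gateway code):

import logging, time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("gateway")

class GatewayError(Exception):
    pass

def submit_transaction(send, request, retries=3, backoff=2.0):
    # request is assumed to be a dict with an "id" field; send does the actual exchange
    for attempt in range(1, retries + 1):
        try:
            return send(request)                        # the actual transaction exchange
        except (ConnectionError, TimeoutError) as exc:
            # objective 2: leave enough context behind that first level problem
            # determination doesn't take three hours
            log.warning("attempt %d/%d failed: %r (request id %s)",
                        attempt, retries, exc, request.get("id"))
            if attempt < retries:
                time.sleep(backoff * attempt)           # objective 1: ride out transient faults
    log.error("giving up on request %s after %d attempts", request.get("id"), retries)
    raise GatewayError(f"transaction {request.get('id')} failed after {retries} attempts")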

One of the issues was that while the iso8583 messages could be formatted into tcp/ip packets ... there was lots of integrity provisioning that had been built up around the circuit-based network that didn't exist in the translation to the open internet packet-based network ... for instance, who do you get an SLA (service level agreement) from?

at this level ... it is pretty much a methodology and approach (aka not a recreational activity), independent of things like technology and computer languages.

misc. collected stuff regarding ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

various previous NTF/e-commerce retellings ...
https://www.garlic.com/~lynn/99.html#16 Old Computers
https://www.garlic.com/~lynn/aadsm5.htm#asrn2 Assurance, e-commerce, and some x9.59 ... fyi
https://www.garlic.com/~lynn/aadsm5.htm#asrn3 Assurance, e-commerce, and some x9.59 ... fyi
https://www.garlic.com/~lynn/2001h.html#43 Credit Card # encryption
https://www.garlic.com/~lynn/2003.html#37 Calculating expected reliability for designed system
https://www.garlic.com/~lynn/2003b.html#53 Microsoft worm affecting Automatic Teller Machines
https://www.garlic.com/~lynn/2003j.html#15 A Dark Day
https://www.garlic.com/~lynn/aadsm16.htm#20 Ousourced Trust (was Re: Difference between TCPA-Hardware and a smart card and something else before

misc SLA postings (although there is probably some drift into serial link adapter as opposed to service level agreement):
https://www.garlic.com/~lynn/aepay10.htm#53 First International Conference On Trust Management
https://www.garlic.com/~lynn/2000f.html#31 OT?
https://www.garlic.com/~lynn/2000g.html#50 Egghead cracked, MS IIS again
https://www.garlic.com/~lynn/2001e.html#48 Where are IBM z390 SPECint2000 results?
https://www.garlic.com/~lynn/2001j.html#23 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001n.html#85 The demise of compaq
https://www.garlic.com/~lynn/2002.html#28 Buffer overflow
https://www.garlic.com/~lynn/2002.html#29 Buffer overflow
https://www.garlic.com/~lynn/2002e.html#32 What goes into a 3090?
https://www.garlic.com/~lynn/2002e.html#73 Blade architectures
https://www.garlic.com/~lynn/2002h.html#11 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#12 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2003b.html#53 Microsoft worm affecting Automatic Teller Machines
https://www.garlic.com/~lynn/2003d.html#37 Why only 24 bits on S/360?
https://www.garlic.com/~lynn/2003h.html#15 Mainframe Tape Drive Usage Metrics
https://www.garlic.com/~lynn/2003l.html#49 Thoughts on Utility Computing?
https://www.garlic.com/~lynn/2003o.html#54 An entirely new proprietary hardware strategy
https://www.garlic.com/~lynn/2003o.html#64 1teraflops cell processor possible?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Mainframe Emulation Solutions

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Emulation Solutions
Newsgroups: comp.arch
Date: Tue, 30 Dec 2003 17:40:16 GMT
"Del Cecchi" writes:
Never was clear what you would do with that little 370, at least I couldn't figure out anything. And the little 370 follow ons didn't do too well either, 4331, racetrack, et al. Meanwhile S/34, 36, 38 sold a bunch as did comparable systems from other companies.

Was it price, performance, applications, operating system, software pricing? Who knows.


the mid-range 4341 & vax saw a big explosion starting in the very late '70s ... in large part departmental & distributed computing. their follow-ons didn't do nearly as well as the market was starting to drift to PCs and workstations.

possibly one of the issues for the 4331 was that there was still some amount of manual care&feeding ... so the total cost of ownership for the 4331 possibly wasn't all that different from the 4341 (people costs starting to dominate hardware costs).

my repeated assertion is that the internet exceeded the size of the internal network sometime mid-85 because of 1) the introduction of gateways/internetworking on 1/1/83 to the internet/arpanet and 2) the proliferation of workstations and then PCs as network nodes.

for various reasons, there was strong pressure to maintain PC and workstation attachment to mainframes as emulated terminals (as opposed to network nodes)

some vax specific numbers:
https://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction
misc. past departmental/distributed computing postings:
https://www.garlic.com/~lynn/96.html#16 middle layer
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
https://www.garlic.com/~lynn/2001n.html#23 Alpha vs. Itanic: facts vs. FUD
https://www.garlic.com/~lynn/2002.html#2 The demise of compaq
https://www.garlic.com/~lynn/2002.html#7 The demise of compaq
https://www.garlic.com/~lynn/2002d.html#4 IBM Mainframe at home
https://www.garlic.com/~lynn/2002h.html#52 Bettman Archive in Trouble
https://www.garlic.com/~lynn/2002i.html#30 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2003d.html#64 IBM was: VAX again: unix

some past threads regarding terminal emulation ... and some discussion that while it allowed for early & quick attachment of PCs and workstations, later on the large terminal emulation infrastructure inhibited moving them to more capable network participation:
https://www.garlic.com/~lynn/2000.html#6 Computer of the century
https://www.garlic.com/~lynn/2000b.html#35 VMS vs. Unix (was: Why are Suns so slow?)
https://www.garlic.com/~lynn/2000g.html#13 IBM's mess (was: Re: What the hell is an MSX?)
https://www.garlic.com/~lynn/2000g.html#14 IBM's mess (was: Re: What the hell is an MSX?)
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001j.html#16 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#35 Newbie TOPS-10 7.03 question
https://www.garlic.com/~lynn/2002d.html#14 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002j.html#66 vm marketing (cross post)
https://www.garlic.com/~lynn/2002j.html#74 Itanium2 power limited?
https://www.garlic.com/~lynn/2002k.html#19 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002k.html#24 computers and stuff
https://www.garlic.com/~lynn/2002k.html#29 computers and stuff
https://www.garlic.com/~lynn/2002k.html#30 computers and stuff
https://www.garlic.com/~lynn/2002l.html#53 10 choices that were critical to the Net's success
https://www.garlic.com/~lynn/2002q.html#40 ibm time machine in new york times?
https://www.garlic.com/~lynn/2002q.html#41 ibm time machine in new york times?
https://www.garlic.com/~lynn/2003b.html#45 hyperblock drift, was filesystem structure (long warning)
https://www.garlic.com/~lynn/2003c.html#23 difference between itanium and alpha
https://www.garlic.com/~lynn/2003c.html#28 difference between itanium and alpha
https://www.garlic.com/~lynn/2003c.html#33 difference between itanium and alpha

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Mainframe Emulation Solutions

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Emulation Solutions
Newsgroups: comp.arch
Date: Tue, 30 Dec 2003 21:25:35 GMT
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
And therein hangs a tale ....

IBM survived that obviously catastrophic strategic decision remarkably well, given its pervasiveness. Most observers that I talked to were predicting IBM's bankruptcy, on the grounds of heading in a direction where the market was shrinking as you watched.


ref'ed posts:
https://www.garlic.com/~lynn/2003p.html#32 Mainframe Emulation Solutions
https://www.garlic.com/~lynn/2003p.html#38 Mainframe Emulation Solutions

the terminal emulation option opened a significant market for the ibm/pc ... possibly jump starting its dominance in the market. for effectively the same price, form factor, and desk area, you could get a keyboard and display that replaced the existing mainframe terminal ... and also provided the ability to do some local computing. in that sense, terminal emulation was the initial silver bullet for the ibm/pc. then with a growing install base, it provided incentive for a lot more application development (which then snowballed: install base growth drove more application development; more application development drove install base growth)

as you started to see a lot more applications deployed on the desktop, the terminal emulation (that somewhat kick-started the market penetration) became something of a hindrance. The majority of the data was still back on mainframe disks ... and the terminal emulation paradigm was beginning to severely throttle the ability of desktop applications to directly access/use that data.

the mainframe disk division developed a number of products that bypassed the terminal emulation implementation and allowed high thruput access between the desktop and the backend mainframe disks. these products were vetoed by the communication division on the theory that the communication division owned the mainframe interface to the desktop (and they had a huge install base of terminal emulation products to protect).

the difficulty of the desktop being able to access the backend mainframe data (something akin to a modern day SAN) helped fuel huge demand for local LAN servers as well as desktop-based disk capacity.

sometime in the mid/late 80s a senior technical person from the mainframe disk operation gave a talk at an internal world-wide communication conference in raleigh. the talk was supposedly about the thruput analysis of some mainframe terminal controller ... but was actually devoted to the explicit claim that the executive that headed up the communication division was going to be responsible for the demise of the mainframe disk operation ... because the data was migrating out of the glass house and the various disk-division-developed products for effectively serving the distributed and desktop market had been vetoed by the communication division.

in that sense, it wasn't so much a strategic decision ... it was just inter-divisional rivalry ... and that mainframe disk division is effectively gone.

slightly related is that SNA is a triple oxymoron ...
it isn't a System, it isn't a Network, and it isn't an Architecture

.... it basically referred to host-based terminal communication infrastructure. one of the closest things to having network support in that era was APPN ... and the communication (SNA) division non-concurred with its announcement (we need no stink'n networking).

a number of posts related to 3-tier architecture, sna, and/or saa:
https://www.garlic.com/~lynn/subnetwork.html#3tier
some have claimed that saa was a strategy to help try and stem the client/server invasion and maintain the terminal emulation paradigm. multi-tier further aggravated the migration of data processing out of the glass house.

misc. past threads mentioning the APPN veto issue:
https://www.garlic.com/~lynn/2000.html#51 APPC vs TCP/IP
https://www.garlic.com/~lynn/2000.html#53 APPC vs TCP/IP
https://www.garlic.com/~lynn/2000b.html#89 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2000c.html#54 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2001i.html#31 3745 and SNI
https://www.garlic.com/~lynn/2002.html#28 Buffer overflow
https://www.garlic.com/~lynn/2002b.html#54 Computer Naming Conventions
https://www.garlic.com/~lynn/2002c.html#43 Beginning of the end for SNA?
https://www.garlic.com/~lynn/2002g.html#48 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#12 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#48 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002k.html#20 Vnet : Unbelievable
https://www.garlic.com/~lynn/2003.html#67 3745 & NCP Withdrawl?
https://www.garlic.com/~lynn/2003d.html#49 unix
https://www.garlic.com/~lynn/2003h.html#9 Why did TCP become popular ?
https://www.garlic.com/~lynn/2003o.html#48 incremental cms file backup
https://www.garlic.com/~lynn/2003o.html#55 History of Computer Network Industry
https://www.garlic.com/~lynn/2003p.html#2 History of Computer Network Industry

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

virtual-machine theory

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual-machine theory
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 30 Dec 2003 22:00:54 GMT
gilmap writes:
You and Shmuel make a persuasive practical argument, and I'll add my own previously stated opinion that TIG is strongly biased in favor of Q for any practical real-world virtual machine facility.

I suppose I was overly impressed by reading some decades ago an article that argued that the PDP-6 was not virtualizable because of the JRST instruction which functioned as LPSW in the supervisor state and as BR14 in the problem state, and was heavily used in both senses. This made it impractical to implement a virtual machine facility on the PDP-6. Certainly one could code a PDP-6 simulator which would run on the PDP-6, and even defeat Q at TIG, but I suspect even VMT adherents would disallow this on a basis largely of performance and utility. But such arguments are what you recognize as "not well formed".


my training was that the execution of instructions running in the virtual machine appeared to conform to the principles of operation .... i.e. if the virtual guest thot it was executing a supervisor state instruction ... it appeared to operate as if the supervisor state instruction had executed as per the principles of operation.

it was not necessary that it be totally impossible to tell whether the guest was running on a real machine or a virtual machine ... it was just necessary that when instructions executed, they performed as the POP specified ... if an LPSW instruction was defined as operating a specific way in supervisor state ... then the instruction appeared to operate exactly that way (even if it was running in virtual machine mode).

consistent with that definition was the use of the diagnose instruction to extend the virtual machine environment. the principles of operation state that the implementation of the diagnose instruction is undefined and model dependent. as a result, all sorts of virtual machine features and extensions were defined using the diagnose instruction ... based on the assertion that the diagnose instruction was executing on a mainframe machine model defined as the VIRTUAL MACHINE model.

again, it wasn't necessary that virtual machine operation be totally transparent to the virtual guest ... it was that all instructions continued to operate exactly as defined in the principles of operation; if the guest thot it was executing an instruction in supervisor state ... then the execution of that instruction behaved as if it was in supervisor state (whether it was running on the bare iron or in a virtual machine).

Initially this was achieved on 360 where supervisor state instructions, when executed in problem state, would result in an exception caught by the virtual machine supervisor. The virtual machine supervisor then simulated that instruction and returned control to the virtual machine execution (in problem state). Later 370s got virtual machine microcode assists .... the real machine microcode executing a supervisor state instruction could determine whether to execute it in true supervisor state or in emulated virtual machine supervisor state.
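
a rough sketch of that trap-and-simulate flow (in C rather than the actual
360/370 code, and with all the helper names made up for illustration ...
this is not the real CP implementation):

    #include <stdint.h>

    /* hypothetical guest state ... illustration only */
    struct vcpu {
        uint64_t psw;              /* guest's virtual PSW */
        int      virt_supervisor;  /* guest *thinks* it is in supervisor state */
    };

    enum { OP_LPSW = 0x82, OP_DIAGNOSE = 0x83 };   /* actual 360 opcodes */
    #define PSW_PROBLEM_BIT (1ULL << (63 - 15))    /* PSW bit 15 = problem state */

    /* made-up helpers standing in for the rest of the virtual machine monitor */
    extern void     reflect_program_check(struct vcpu *g);
    extern uint64_t fetch_guest_operand_dword(struct vcpu *g);
    extern void     simulate_vm_diagnose(struct vcpu *g);
    extern void     simulate_other_privop(struct vcpu *g, int opcode);
    extern void     dispatch_guest(struct vcpu *g);

    /* the guest always runs on the real hardware in problem state, so every
       privileged opcode traps here with a privileged-operation exception */
    void on_privileged_op_exception(struct vcpu *g, int opcode)
    {
        if (!g->virt_supervisor) {
            /* the guest itself was in (virtual) problem state ... reflect the
               program check into the guest, exactly as bare iron would */
            reflect_program_check(g);
        } else {
            switch (opcode) {
            case OP_LPSW:          /* simulate exactly as the POP defines it */
                g->psw = fetch_guest_operand_dword(g);
                g->virt_supervisor = !(g->psw & PSW_PROBLEM_BIT);
                break;
            case OP_DIAGNOSE:      /* model-dependent by definition ... here the
                                      "model" is the virtual machine model */
                simulate_vm_diagnose(g);
                break;
            default:
                simulate_other_privop(g, opcode);
            }
        }
        dispatch_guest(g);         /* resume the guest, still in real problem state */
    }

the LPSW case is the whole point ... the guest sees exactly the supervisor
state behavior the POP defines, even though the real hardware never left
problem state.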

slightly related recent thread in comp.arch:
https://www.garlic.com/~lynn/2003p.html#32 Mainframe Emulation Solutions
https://www.garlic.com/~lynn/2003p.html#38 Mainframe Emulation Solutions
https://www.garlic.com/~lynn/2003p.html#39 Mainframe Emulation Solutions

misc. previous virtual machine theory related posts:
https://www.garlic.com/~lynn/2001b.html#29 z900 and Virtual Machine Theory
https://www.garlic.com/~lynn/2001b.html#32 z900 and Virtual Machine Theory
https://www.garlic.com/~lynn/2001d.html#54 VM & VSE news
https://www.garlic.com/~lynn/2001g.html#29 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001h.html#7 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001i.html#3 Most complex instructions (was Re: IBM 9020 FAA/ATC Systems from 1960's)
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2002.html#52 Microcode?
https://www.garlic.com/~lynn/2002b.html#6 Microcode?
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002d.html#31 2 questions: diag 68 and calling convention
https://www.garlic.com/~lynn/2002e.html#75 Computers in Science Fiction
https://www.garlic.com/~lynn/2003.html#14 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003f.html#52 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003m.html#36 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003m.html#37 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003p.html#9 virtual-machine theory

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

comp.arch classic: the 10-bit byte

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: comp.arch classic: the 10-bit byte
Newsgroups: comp.arch
Date: Wed, 31 Dec 2003 00:09:22 GMT
old_systems_guy@yahoo.com (John Mashey) writes:
This is probably yet another turn of the wheel.

Offhand, at least several mainframes of the 1960s/1970s had a somewhat similar feature, i.e., more software management of several speeds of memory.

For example, a CDC 6600 (circa 1965) had 128K words (60 bit) of fast memory, and 2 Mword of Extended Core Storage. CDC 7600s had a similar structure, albeit with different numbers.

IBM 360/75, 360/67, etc had LCS (Large Core Storage :-); we had a 360/67 with 512KB main memory and 8MB LCS.

As we start getting interesting-sized on-chip RAM, we'll almost certainly start seeing things like this suggested again.


and then 3090 had expanded store .... which was basically regular memory technology hung on a wide bus and accessed with synchronous instructions that copied pages to/from regular memory.

when kingston was trying to attach HiPPI off the 3090 .... the regular I/O interface couldn't sustain the data-rate, so they cut into the side of the 3090 expanded store bus ... and effectively did HiPPI i/o programming with something analogous to poke instructions ... i.e. copying a page of regular storage (containing the hippi commands) to a reserved address on the expanded store bus.
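
very loosely (the names, the command-page layout, and the reserved block
number here are all made up just to illustrate the poke analogy ... this is
not kingston's actual interface), it amounted to a memory-mapped doorbell:

    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE 4096

    /* made-up reserved block on the expanded store bus that the HiPPI
       adapter hardware watches for command pages */
    #define HIPPI_CMD_BLOCK  0x00F00000ULL

    /* stand-in for the synchronous page-copy instruction: copies one page
       of regular storage out to the given expanded-store block */
    extern void page_out_to_expanded(const void *page, uint64_t xstore_block);

    /* hypothetical command-page layout */
    struct hippi_cmd_page {
        uint64_t src, dst;       /* transfer source/destination */
        uint32_t opcode;         /* e.g. "start transfer" */
        uint32_t len;
        uint8_t  pad[PAGE_SIZE - 24];
    };

    /* issuing a HiPPI command is just "poking" a command page at the
       reserved expanded-store address ... the page copy is the doorbell */
    void hippi_issue(uint32_t opcode, uint64_t src, uint64_t dst, uint32_t len)
    {
        struct hippi_cmd_page cmd;

        memset(&cmd, 0, sizeof cmd);
        cmd.opcode = opcode;
        cmd.src    = src;
        cmd.dst    = dst;
        cmd.len    = len;
        page_out_to_expanded(&cmd, HIPPI_CMD_BLOCK);   /* the synchronous "poke" */
    }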

a quick search engine pass for expanded store turned up ...

3090 with 256mbyte regular memory and 1gbyte expanded store
http://www.cacr.caltech.edu/Publications/techpubs/PAPERS/ccsf003/project.html
and abstract for a talk at stanford on hierarchical memory model:
http://theory.stanford.edu/~aflb/1986-87.html#1986-87.11

I was hoping to quickly find the ibm description of the instructions that copy pages to/from expanded store.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

The BASIC Variations

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The BASIC Variations
Newsgroups: comp.lang.basic.misc,alt.lang.basic,alt.folklore.computers
Date: Wed, 31 Dec 2003 03:50:39 GMT
"Charlie Gibbs" writes:
You're new here, aren't you? Barb's been telling DEC stories in this newsgroup for years. Check the Google archives.

and for something completely different .... a thread in comp.arch on
mainframe emulation solutions

... check google for Del Cecchi's recent comment about Nick Maclaren's comments about my comments in the thread;

my comments:
https://www.garlic.com/~lynn/2003p.html#38 Mainframe Emulation Solutions

which were, in turn, made with regard to earlier comments by Del.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Mainframe Emulation Solutions

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Emulation Solutions
Newsgroups: comp.arch
Date: Wed, 31 Dec 2003 16:45:23 GMT
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
The strategy I was referring to was more closely related to SAA, had little to do with SNA and nothing to do with disks. It was the one that blocked the OS/2 people from providing a system that did not need a 'mainframe' (i.e. running MVS, CMS or OS/400) as a core. There were some extremely heated debates between SHARE (Inc and Europe) and very senior IBM executives over this, including at a technical level, and we got nowhere. Eventually, it helped to kill OS/2, but its demise was caused by the ludicrous Personality project.

SAA came later in the 80s ... a friend i had worked with on ecps microcode for the 138/148 was by then in charge of saa and had a big corner office in somers. we had done the first 3-tier presentation at an executive presentation for all the IS managers of a large multi-national corporation and got extreme backlash from various internal interests (in part because of the huge bandwidth and function being put into the distributed environment). we would drop by his office periodically and needle him that saa wasn't going to work.

I think that both the new lechmere center and the new lotus bldgs had been built down on the charles ... and you could walk by the outside of the lotus bldg and see little advertising boxes ... one for lotus on the mainframe. part of SAA seemed to be that any application (in common use on the PC) ... was being ported to the backend mainframe.

In the '70s, lechmere was a large warehouse-type bldg and i used to walk from north station thru the lechmere parking lot to 545 tech sq. In the '80s and '90s i used to repeat parts of that walk when I happened to be in town. One day, I was walking by as they were prying the letters off the thinking machines bldg and stopped and watched (wondering if I could talk the guy into letting me have some of the letters).

random past saa posts:
https://www.garlic.com/~lynn/99.html#123 Speaking of USB ( was Re: ASR 33 Typing Element)
https://www.garlic.com/~lynn/2000e.html#42 IBM's Workplace OS (Was: .. Pink)
https://www.garlic.com/~lynn/2000e.html#45 IBM's Workplace OS (Was: .. Pink)
https://www.garlic.com/~lynn/2001j.html#20 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#19 HP Compaq merger, here we go again.
https://www.garlic.com/~lynn/2001n.html#34 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2002.html#2 The demise of compaq
https://www.garlic.com/~lynn/2002.html#7 The demise of compaq
https://www.garlic.com/~lynn/2002.html#11 The demise of compaq
https://www.garlic.com/~lynn/2002b.html#4 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002e.html#2 IBM's "old" boss speaks (was "new")
https://www.garlic.com/~lynn/2002j.html#34 ...killer PC's
https://www.garlic.com/~lynn/2002j.html#51 Next step in elimination of 3270's?
https://www.garlic.com/~lynn/2002k.html#33 general networking is: DEC eNet: was Vnet : Unbelievable
https://www.garlic.com/~lynn/2002p.html#34 VSE (Was: Re: Refusal to change was Re: LE and COBOL)
https://www.garlic.com/~lynn/2002p.html#45 Linux paging
https://www.garlic.com/~lynn/2002q.html#40 ibm time machine in new york times?
https://www.garlic.com/~lynn/2003b.html#31 360/370 disk drives
https://www.garlic.com/~lynn/2003c.html#20 difference between itanium and alpha
https://www.garlic.com/~lynn/2003c.html#28 difference between itanium and alpha
https://www.garlic.com/~lynn/2003d.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003h.html#35 UNIX on LINUX on VM/ESA or z/VM

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Mainframe Emulation Solutions

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Emulation Solutions
Newsgroups: comp.arch
Date: Wed, 31 Dec 2003 17:09:05 GMT
"Del Cecchi" writes:
Lotus 123 was the "killer app" for the PC as I recall. 3270 emulation was, as far as I know, popular much later. From what I could see and hear and read, majority of PC users wanted to cut the cord to the IS operation. Even inside IBM that was true. And the mainframe types were perfectly happy with 3277 or 3279. Why screw with a pc when you could have the real thing.

lotus 123 was the killer app for the pc part of the machine and also helped it in the non-terminal-emulation market segment.

in the '80s my brother was a regional marketing rep for apple, and periodically when he was in town ... i would get invited to apple dinners ... which sometimes included mac developers (before the mac was announced). I remember having arguments that the mac really needed mainframe terminal emulation ... asserting that the single office desk footprint feature would add to the install base. At that time, I would characterize the counter-argument as being that the mac was designed only for use on the kitchen table.

in the '70s a number of us

1) had bought and installed FIFO keystroke buffers on the 3277 ... to handle the keyboard lockup problem if you happened to be hitting a key at the same time the system decided to refresh the screen (which then required hitting reset to unlock the keyboard).

2) did a little wirewrap work inside the 3277 keyboard ... to change (decrease) the repeat key delay and (increase) the repeat key rate. I changed both of my settings to .1 ... which (even for just cursor motion) overran the local 3277 screen update rate .... i.e. I had to get used to the cursor continuing to move after i lifted the key ... and effectively learn to judge how much to lead the cursor so that it would eventually coast to a stop at the appropriate position.

3278/3279/3274 lost that ability since all the control electronics were moved out of the terminal and back into the 3274 controller.

it wasn't until i got terminal emulation on the pc ... that i got back reasonably acceptable human factors for the host environment ... and basically could have relatively the same host interface environment at home with pcterm ... and at work with 3270 simulation. I liked the idea of a single keyboard & screen for my work (I currently have a 19in screen and a keyboard/mouse cabled to two different PCs ... and hot-switch between the two with a quick double hit on the scroll lock key).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Saturation Design Point

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Saturation Design Point
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 31 Dec 2003 17:29:02 GMT
Chris_Craddock@ibm-main.bmc.com (Craddock, Chris) writes:
AFAIR it was an indication of whether the box was physically partitionable (separate power/cooling etc) so under that scheme a 3084 was dyadic because it could be lobotomized into 2 x 3081 boxes, but a 3081 could not be broken down into 2 x 3083 boxes, so the 3081 was NOT a dyadic box - even though it looked pretty much like 2 x 3083s internally.

IBM blurred the lines by doing MES upgrades on non-dyadics to make them into larger complexes. These often involved total replacement of more or less everything inside the frame, but with the same serial number so you could retain the depreciation etc.

Somewhere during the life of the 3090 line they dropped the term altogether.


there was never supposed to be a 3083. the other characteristic was that the two-processor SMPs in the 370 line slowed each processor down by 10 percent to give the cache spare cycles to handle cross-cache invalidation signals ... i.e. a two-processor 370 was rated at 1.8 times the hardware performance of a 370 uniprocessor (2 x 0.9 = 1.8) .... note that the actual handling of cross-cache invalidation signals could slow the processors down even more.

the problem was that ACP/TPF at the time didn't have support for multiprocessors and needed flat-out uniprocessor performance. Some tpf installations ran on a 3081 under vm ... with VM providing support for running two copies of TPF ... effectively one on each processor. cutting out the operation of the 2nd cpu (giving the 3083) eliminated the need for the multiprocessor cache slow-down ... aka a 3083 processor was rated at about 15 percent faster than a single 3081 processor.

also in the 3081 time-frame there started to be a lot of work on various kernels to align kernel storage allocation and buffers on cache line boundaries. i believe there were studies showing five percent or better kernel performance improvement from cache-line-aligned storage allocation. non-aligned storage could mean that two different cpus were operating on different storage locations that happened to occupy the same cache line ... resulting in lots of cache-line thrashing.
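
in modern terms this is avoiding false sharing ... a rough sketch (the
128-byte line size is just a stand-in for the actual 3081 line size, and
posix_memalign stands in for a kernel free-storage manager that rounds
allocations to line boundaries):

    #include <stdlib.h>
    #include <stdint.h>

    #define CACHE_LINE 128   /* stand-in value, not the actual 3081 line size */

    /* unpadded per-processor counters ... adjacent counters can land in the
       same cache line, so every store by one cpu invalidates that line in
       the other cpu's cache even though they never touch the same variable */
    struct per_cpu_unpadded {
        uint64_t count;
    };

    /* padded version ... each processor's data owns its own cache line, so
       updates never force cross-cache invalidation of the other's copy */
    struct per_cpu_padded {
        uint64_t count;
        uint8_t  pad[CACHE_LINE - sizeof(uint64_t)];
    };

    /* allocate a control block or buffer on a cache-line boundary */
    void *alloc_line_aligned(size_t size)
    {
        void *p = NULL;

        if (posix_memalign(&p, CACHE_LINE, size) != 0)
            return NULL;
        return p;
    }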

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

comp.arch classic: the 10-bit byte

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: comp.arch classic: the 10-bit byte
Newsgroups: comp.arch
Date: Wed, 31 Dec 2003 21:23:55 GMT
hack@watson.ibm.com (hack) writes:
My vague recollection is that they were significantly faster than 100us (that was early 1990s), and that they were not interruptable. They are of course privileged, and the supervisor code executing them may well be running disabled for interruptions anyway, so it wouldn't matter. In any case, interrupts could still be fielded on another processor.

At the time the instrs were secret, but I was able to discover and guess how they worked anyway, because they are so simple. On one system the comp center allowed me to attach virtual XSTORE to my VM userid, and I taught the paging system of my experimental OS to use it. As of z/Arch PoOps the instrs are public.


my recollection was that on the 3090 they were synchronous (supposedly on the order of 20 microseconds), the original claim being that performance just had to be significantly faster than the pathlength to set up a page I/O transfer request, schedule the request, execute the request, and clean up after the request finished (frequently on the order of thousands of instructions). the additional observation was that it was much more cache-locality friendly.
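
roughly, with made-up names standing in for the real paging supervisor
routines and hardware interfaces, the two fault-resolution paths compare
like this:

    #include <stdint.h>

    /* made-up primitives, illustration only:
       xstore_page_in()  synchronous copy, expanded-store block -> real frame
       start_page_io()   build and schedule a channel program to a DASD
                         paging device; completion arrives as an I/O interrupt */
    extern int  xstore_page_in(void *real_frame, uint64_t xstore_block);
    extern void start_page_io(void *real_frame, uint64_t dasd_slot,
                              void (*done)(void *frame));

    /* fault resolved from expanded store: one synchronous instruction, the
       faulting task keeps the processor, and the cache stays warm */
    int resolve_fault_from_expanded(void *frame, uint64_t xblock)
    {
        return xstore_page_in(frame, xblock);   /* tens of microseconds */
    }

    /* fault resolved from DASD: build the request, schedule it, switch to
       another task, field the I/O interrupt, redispatch ... thousands of
       instructions of pathlength plus the device latency */
    void resolve_fault_from_dasd(void *frame, uint64_t slot,
                                 void (*done)(void *frame))
    {
        start_page_io(frame, slot, done);       /* asynchronous */
    }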

an early wish-list item of mine ... was why couldn't i do direct i/o in/out of expanded store ... setting up requests that transferred huge blocks of expanded store pages w/o requiring them to be dragged thru regular memory .... somewhat akin to the page migration that I shipped as part of the resource manager (long ago and far away):
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager
which did require dragging pages off of one kind of device into regular memory and then back out to other kinds of devices.

HiPPI switches were coming out with 3rd-party transfers, which were also starting to be able to support things like direct disk-to-tape backup .... w/o having to drag the data thru memory (and of course kingston was also carving 3090 HiPPI support out of the side of the expanded store bus).

I had previously done a quick search engine pass hoping to pick up the POP URL for the instructions ...
https://www.garlic.com/~lynn/2003p.html#41 comp.arch classic: the 10-bit byte
a little more effort with the z/arch POP contents turns up page in (b223):
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR001/10.29?SHELF=DZ9ZBK01&DT=20020416112421
and page out (b22f):
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR001/10.30?SHELF=DZ9ZBK01&DT=20020416112421

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
