List of Archived Posts

2000 Newsgroup Postings (11/24 - 12/31)

TSS ancient history, was X86 ultimate CISC? designs)
Cryptogram Newsletter is off the wall?
TSS ancient history, was X86 ultimate CISC? designs)
virtualizable 360, was TSS ancient history
virtualizable 360, was TSS ancient history
e-commerce: Storing Credit Card numbers safely
virtualizable 360, was TSS ancient history
360/370 instruction cycle time
360/370 instruction cycle time
360/370 instruction cycle time
360/370 instruction cycle time
360/370 instruction cycle time
360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
IBM's mess (was: Re: What the hell is an MSX?)
IBM's mess (was: Re: What the hell is an MSX?)
360/370 instruction cycle time
360/370 instruction cycle time
IBM's mess (was: Re: What the hell is an MSX?)
360/370 instruction cycle time
Could CDR-coding be on the way back?
360/370 instruction cycle time
No more innovation? Get serious
IBM's mess
A question for you old guys -- IBM 1130 information
SSL as model of security
Who Owns the HyperLink?
Could CDR-coding be on the way back?
Could CDR-coding be on the way back?
Could CDR-coding be on the way back?
Could CDR-coding be on the way back?
stupid user stories
Multitasking and resource sharing
does CA need the proof of acceptance of key binding ?
does CA need the proof of acceptance of key binding ?
does CA need the proof of acceptance of key binding ?
stupid user stories
stupid user stories
4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
Could CDR-coding be on the way back?
No more innovation? Get serious
Egghead cracked, MS IIS again
4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
A new "Remember when?" period happening right now
4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
Use of SET?
Use of SET?
Egghead cracked, MS IIS again
> 512 byte disk blocks (was: 4M pages are a bad idea)
> 512 byte disk blocks (was: 4M pages are a bad idea)
Egghead cracked, MS IIS again

TSS ancient history, was X86 ultimate CISC? designs)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TSS ancient history, was X86 ultimate CISC? designs)
Newsgroups: alt.folklore.computers
Date: Sat, 25 Nov 2000 01:20:01 GMT
... i was going thru some old boxes related to early CP/67.

Lincoln Labs was the first place that cambridge installed CP/67 sometime in '67 (the university that I was at was the 3rd cp/67 installation, the last week in jan. 1968).

In any case, ran across Lincoln Laboratory Multi-Programming
Supervisor 360D-5.1-004 ... SHARE contributed program library.

Date of Submittal, May 26, 1966 by Joel Winett and Frank Belvin.

Also LLMPS was the initial core infrastructure used by Michigan for the Michigan Terminal System (MTS).

In addition to CP/67 (TSS/360 alternative) developed for the 360/67 (by cambridge science center), another system developed for the 360/67 was MTS done by Michigan.

random refs:
https://www.garlic.com/~lynn/2000.html#89

From the LLMPS manual
E. Preparing a Supervisor System

The Lincoln Laboratory Multi-Programming Supervisor is formed from the main supervisor program SUPER, the table subprograms TABLES and BUFFS, the supervisor subprograms FNDJTL, FIDCQ, and FBJTJL, and the problem state subprograms EXIT and JBRP. The operation of the supervisor is controlled through the use of a set of tables, the Job List, the Job Table, the Device List, and the Device Table. The formats of these tables are defined using equalities and all references to the fields in a table are made symbolically.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Cryptogram Newsletter is off the wall?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Cryptogram Newsletter is off the wall?
Newsgroups: sci.crypt
Date: Sat, 25 Nov 2000 15:12:02 GMT
vjs@calcite.rhyolite.com (Vernon Schryver) writes:
If you mean that the TCB created by a SYN should be deleted if the SYN-Ack sent toward the source of the SYN elicits an ICMP Unreachable, then I disagree. At least some TCP stacks honor ICMP Unreachables when the TCP state machine is not in Established. In other words, TCBs created by orphan SYNs are already going poof as the result of ICMP Unreachables in what I define as good TCP stacks.

I guess I've only had to deal with TCP stacks that weren't good.

W/o an opaque identifier that TCP pushed down and that was returned in ICMP responses ... so that when IP pushed the ICMP up ... the upper layers could filter out ICMPs not for them ... somebody has to filter the ICMPs that might be pushed up to one or more instances.
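
a minimal sketch of the demultiplexing involved (my own illustration; hypothetical names, not any particular stack's API): the ICMP error quotes the offending datagram's IP header plus the first 8 bytes of its payload, which for TCP is enough to recover the port pair and match a TCB ... absent any opaque identifier, that quoted prefix is the only filter key available.

# sketch: matching an ICMP unreachable back to a TCB via the quoted
# original headers (hypothetical names, not any particular stack's API)
import struct

tcbs = {}    # (local_ip, local_port, remote_ip, remote_port) -> TCB state

def icmp_unreachable(icmp_payload):
    # the error quotes the original IP header + first 8 bytes of payload
    ihl = (icmp_payload[0] & 0x0F) * 4             # quoted IP header length
    src, dst = icmp_payload[12:16], icmp_payload[16:20]
    sport, dport = struct.unpack("!HH", icmp_payload[ihl:ihl + 4])
    tcb = tcbs.get((src, sport, dst, dport))
    if tcb is None:
        return                                      # not for us ... drop
    if tcb["state"] != "ESTABLISHED":
        del tcbs[(src, sport, dst, dport)]          # orphan TCB goes poof

hdr = bytearray(20)
hdr[0] = 0x45                                       # IPv4, 20-byte header
hdr[12:16] = b"\x0a\x00\x00\x01"                    # our address
hdr[16:20] = b"\xc0\x00\x02\x01"                    # unreachable peer
tcbs[(bytes(hdr[12:16]), 12345, bytes(hdr[16:20]), 80)] = {"state": "SYN-SENT"}
icmp_unreachable(bytes(hdr) + struct.pack("!HH", 12345, 80))
print(tcbs)                                         # {} ... cleaned up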

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

TSS ancient history, was X86 ultimate CISC? designs)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TSS ancient history, was X86 ultimate CISC? designs)
Newsgroups: alt.folklore.computers
Date: Sat, 25 Nov 2000 15:32:58 GMT
random other urls on 360/67 from search engines
http://accl.grc.nasa.gov/archives/index.html
https://www.multicians.org/thvv/360.67.html
http://www.itd.umich.edu/~doc/Digest/0596/index.html
https://web.archive.org/web/20010220084001/http://www.itd.umich.edu/~doc/Digest/0596/index.html
http://www.clock.org/~jss/work/mts/30years.html
http://www.cmc.com/lars/engineer/computer/ibm360.htm
https://web.archive.org/web/20010211120047/www.cmc.com/lars/engineer/comphist/ibm360.htm
https://web.archive.org/web/20030813224124/www.cs.ncl.ac.uk/events/anniversaries/40th/images/ibm360_672/index.html
http://homes.cs.washington.edu/~lazowska/frontiers/progress/

& of course Melinda's page
https://www.leeandmelindavarian.com/Melinda#VMHist

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

virtualizable 360, was TSS ancient history

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtualizable 360, was TSS ancient history
Newsgroups: alt.folklore.computers
Date: Sat, 25 Nov 2000 15:45:37 GMT
nospam@nowhere.com (Steve Myers) writes:
fact) VSE. For systems where this was not possible (e.g., VM under VM, or MVS under VM) IBM developed a lot of hardware assist capabilities.

note that this has carried over into the current generation of ibm mainframes ... LPARs/logical partitions, effectively a VM subset hidden in the hardware of the machine.

There were actually two distinct problems with running virtualized virtual memory systems. one was the "shadowing" of all the virtual memory hardware components ... aka a software analogy of the virtual-memory lookaside buffer giving virtual->real translation. the virtual->real tables in the virtual machine had to be emulated in the VM supervisor, because the virtual machine's tables were really translating from one virtual space to another virtual space; the real CP had to shadow those tables to provide the real addresses.
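
a toy illustration of the shadowing (my own sketch, not CP's actual data structures): the guest's tables translate guest-virtual to guest-real; CP's own tables translate guest-real to host-real; the shadow table that the real hardware uses is the composition of the two, rebuilt as either side changes.

# toy illustration of shadow tables: the hardware uses one level of
# translation, so CP materializes the composition guest-virtual ->
# guest-real -> host-real (a sketch, not CP's actual data structures)
guest_table = {0: 5, 1: 2, 2: 7}     # guest-virtual page -> guest-real page
cp_table = {5: 40, 2: 13}            # guest-real page -> host-real page

def build_shadow(guest_table, cp_table):
    shadow = {}
    for gvpage, grpage in guest_table.items():
        if grpage in cp_table:       # guest-real page resident in real storage
            shadow[gvpage] = cp_table[grpage]
        # else: leave the shadow entry invalid; touching it faults to CP
    return shadow

print(build_shadow(guest_table, cp_table))    # {0: 40, 1: 13}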

The other is that a virtual machine page replacement algorithm approximating LRU ... running under a real machine page replacement algorithm approximating LRU ... could get into real pathological situations. Effectively the CP page replacement algorithm was doing LRU replacement under the assumption that the page accesses in virtual memory were characteristic of "least recently used". However, a virtual machine running its own page replacement algorithm would start to look more like "most recently used" ... i.e. its least recently used page was the page that was most likely next to be used. LRU algorithms running under LRU algorithms violate the lower-level LRU assumptions.
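
the pathology is easy to reproduce (a toy sketch of mine, not CP data): a guest doing its own LRU replacement next touches exactly the page a host LRU policy considers the best eviction candidate. cycling over n+1 pages ... the reference stream such a guest generates ... against an n-frame LRU cache faults on every single access, while the same trace under most-recently-used replacement does far better.

# toy demonstration: an LRU-like reference stream (cycling over n+1
# pages with n frames) faults on every access under LRU replacement,
# while MRU replacement on the same trace does far better
def run(trace, frames, evict_index):
    resident, faults = [], 0
    for page in trace:
        if page in resident:
            resident.remove(page)            # will re-append as most recent
        else:
            faults += 1
            if len(resident) >= frames:
                resident.pop(evict_index)    # 0 evicts LRU, -1 evicts MRU
        resident.append(page)                # most recent at the tail
    return faults

trace = list(range(5)) * 100                 # cycle over 5 pages, 4 frames
print("LRU:", run(trace, 4, 0))              # 500 faults ... every access
print("MRU:", run(trace, 4, -1))             # far fewer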

VS1 did a little more than operate in real mode. It laid out its virtual memory on a one-for-one basis in a virtual machine ... but it also supported pseudo page fault & pseudo page complete interrupts being reflected from the CP supervisor (i.e. CP could notify the VS1 supervisor when applications running under VS1 had page faults ... allowing VS1 to perform a task switch).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

virtualizable 360, was TSS ancient history

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtualizable 360, was TSS ancient history
Newsgroups: alt.folklore.computers,comp.arch
Date: Mon, 27 Nov 2000 14:46:44 GMT
nouce@multics.ruserved.com (Richard Shetron) writes:
1) If you wrote your own CCW's (Channel Command Words), these are a chained list of I/O instructions executed by the I/O unit(s) to perform I/O. Disk read/write, printer, terminal, etc. OS-360 allowed programs to write their own, but they would be validated before execution. With CP-67, you could hand CP-67 a CCW list and if you got the timing right, you could then modify the list between the time CP-67 verified the list and locked all pages involved in I/O into memory for execution. Doing this you could modify the list so that CP-67 would try to lock more virtual pages into ram than real ram; the result was CP-67 would block everything waiting for more memory for pages to lock.

causing all of memory to be pinned was a (relatively) well-known denial of service attack that got fixed several times (possibly more so by the various service bureaus that started offering cp/67 as a time-sharing service in the late '60s ... than in the standard product).

on the CSC machine there were frequently several MIT students. One day the CSC system crashed three times before the problem was identified and a fix generated. the genesis of the failure was a particular student, who was called and asked to stop performing the particular operation until the fix was installed. the student continued to perform the operation, crashing the system a couple more times, before their account was permanently removed. the student later complained to their faculty adviser that their access shouldn't have been permanently removed regardless of what they did.

the traditional approach in situations like that was that the student was given the source and asked to fix the problem they had uncovered ... but there were a few cases where the only interest was in seeing how many times the system could be crashed.

27 crashes in single day:
http://www.lilli.com/360-67 (corrected) https://www.multicians.org/thvv/360.67.html

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

e-commerce: Storing Credit Card numbers safely

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: e-commerce: Storing Credit Card numbers safely
Newsgroups: comp.security.misc,alt.security,alt.computer.security
Date: Mon, 27 Nov 2000 22:04:21 GMT
"Joseph Ashwood" writes:
While many of the suggestions given are in fact quite useful (I consider the hash and public key crypto suggestions to be the best of the lot) there are alternatives that are more secure. For example both American Express and Visa have issued smart cards to customers that if they use them correctly would be vastly more secure. Also I am aware of Visa doing some research in online CC security, you may want to ask them about their joint project with Arcot (disclaimer: I am employed by Arcot Systems Inc). I am sure Mastercard and American Express have similar goings on (Visa probably has a backup plan as well). Either of these methods would make it impossible for even you to read the numbers (in some cases you would never possess them), making it extremely secure. Joe

the other approach is X9.59, which defines that all x9.59 account numbers are only usable in authenticated (signed) transactions (i.e. the recommendation is that a non-authenticated transaction involving an x9.59 account number be declined).

to a large extent, x9.59 (and similar protocols) are privacy neutral (i.e. not divulging privacy information) and eliminate the account-number as a point of attack ... since they are defined as being usable only in authenticated transactions.

effectively, account numbers usable in non-authenticated transactions become shared-secrets ... so that knowledge of the existence of an account number's numerical value needs to be restricted.
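
the flavor of the rule in a few lines (a schematic sketch only ... x9.59 itself uses public-key digital signatures; hmac stands in here just to keep the example self-contained): the account number authorizes nothing by itself, and any transaction that doesn't verify is declined.

# schematic sketch of "account numbers only usable in authenticated
# transactions" -- hmac stands in for the x9.59 digital signature
import hmac, hashlib

account_keys = {"4000123412341234": b"key registered with the issuer"}

def authorize(account, txn, mac):
    key = account_keys.get(account)
    if key is None:
        return "decline"
    expected = hmac.new(key, txn, hashlib.sha256).digest()
    # a harvested account number is useless by itself: transactions
    # without verifiable authentication are declined
    return "approve" if hmac.compare_digest(mac, expected) else "decline"

txn = b"pay merchant 42.50"
good = hmac.new(account_keys["4000123412341234"], txn, hashlib.sha256).digest()
print(authorize("4000123412341234", txn, good))          # approve
print(authorize("4000123412341234", txn, b"\x00" * 32))  # decline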

random refs:
https://www.garlic.com/~lynn/2000f.html#72
https://www.garlic.com/~lynn/aadsm2.htm#straw
https://www.garlic.com/~lynn/aadsm3.htm#cstech13
https://www.garlic.com/~lynn/

X9 is ansi financial industry standards body
http://www.x9.org/

TC68 is the ISO international financial standards body (of which X9 is a member)
http://www.tc68.org/

ABA (american bankers association) serves as the secretariat of both the X9 (US) and TC68 (ISO/international) financial standards bodies.
http://www.aba.com/

disclaimer ... I was a member of the X9A10 working group and helped write the X9.59 standard. The charter given the X9A10 working group was to preserve the integrity of the financial infrastructure for all electronic retail payment (account-base) transactions (not just credit cards and/or even all payment cards).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

virtualizable 360, was TSS ancient history

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtualizable 360, was TSS ancient history
Newsgroups: alt.folklore.computers,comp.arch
Date: Tue, 28 Nov 2000 17:28:54 GMT
how 'bout virtualizing 360 on an intel platform ... blurs the line even further on what is a mainframe
http://www-304.ibm.com/ibmlink
https://web.archive.org/web/20240130182226/https://www.funsoft.com/

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

360/370 instruction cycle time

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360/370 instruction cycle time
Newsgroups: bit.listserv.ibm-main
Date: Tue, 05 Dec 2000 05:53:01 GMT
eells@US.IBM.COM (John Eells) writes:
Yeah, I guess I'm too used to typing "390." S/370 168 (actual machine type was 3168) is indeed what I meant. Don't ask me why I remember its cycle speed but not those of other processors I worked on or any of their clock speeds.

(BTW, I'm not 100% sure; but IIRC, the 165 and 168 had the same cycle time and both were faster than the Mod 65. I can no longer remember whether the 165 and 168 E-Units were that similar (whether they had the same number of I-streams or whether the 165 had Branch Prediction, for example), even though they shared a common architecture, except that I'm pretty sure the 165 did not have High-Speed Multiply. The biggest difference between the two, of course, is that the 168 had semiconductor storage rather than core...but this is as off-topic as I will stray.)


the 370/155 and above were horizontal microcoded machines ... the machines below the 370/155 were vertical microcode machines (basically a native machine architecture that averaged around 10 native machine instructions to execute every 370 instruction ... not all that different from the hercules project that emulates 370 on an intel processor).

the 370/165 -> 370/168 transition added faster memory (from ???? to around 400 nanoseconds) ... and eventually the 370/168-3 had a 64kbyte cache (presumably fewer cache misses as well as lower elapsed time when there was a cache miss).

various kinds of things done to the hardware & microcode dropped the avg. cycle time per instruction from around 2.1 avg (80ns) cycles per 370 instruction on the 165 to about 1.6 avg (80ns) cycles per 370 instruction on the 168 (faster memory, bigger cache, and improved hardware/microcode reducing the avg. cycles per 370 instruction all contributed to the 168 being faster than the 165).
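
rough arithmetic using only the figures above (my back-of-envelope; the "avg" presumably already folds in cache misses and memory stalls):

# back-of-envelope from the figures above
cycle_ns = 80
cpi_165, cpi_168 = 2.1, 1.6
t165 = cpi_165 * cycle_ns     # ~168ns per 370 instruction on the 165
t168 = cpi_168 * cycle_ns     # ~128ns per 370 instruction on the 168
print(t165, t168, round(t165 / t168, 2))    # 168.0 128.0 1.31 ... ~31% faster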

the lower end machines with vertical microcode ... at 10 native instructions per 370 instruction ... needed a 5mip (native) machine to yield 1/2 mip 370 ... the 370/125 needed something like 1mip native to yield 1/10th that in 370 cycles.

the type of instructions in the operating system supervisor typically dropped into the microcode on these machines byte-for-byte (i.e. 6000 bytes of 370 code was about 6000 bytes of 370/148 microcode). The basic ECPS package then saw a 10:1 speedup by having critical pieces of the operating system implemented in native machine microcode (other functions saw more, because the two domains were completely separate and didn't have to save/restore registers across the boundary).

random ref:
https://www.garlic.com/~lynn/94.html#21

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key https://www.garlic.com/~lynn/

360/370 instruction cycle time

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360/370 instruction cycle time
Newsgroups: bit.listserv.ibm-main
Date: Wed, 06 Dec 2000 15:26:43 GMT
Rick.Fochtman@BOTCC.COM (Rick Fochtman) writes:
---------------<snip>----------------
The 370/135 did not have enough memory to run MVS. The 370/145 was not the smallest one, just the first announced.
---------------<unsnip>--------------

IIRC, there were also a 370/125 and a 370/115, both of which were suitable for DOS or DOS/VS but not anywhere near large enough for MVS.


I got VM/370 deployed on a 370/125 (256kbytes of memory) ... but had to get a bug fixed in the 370/125 microcode (which hadn't been detected before ... even tho the machines were in the field). The VM/370 system generation procedure ... (but not boot/ipl or normal system running) ... uses an MVCL with a length of 16mbytes for the "to" operand and zero for the "from" operand to clear storage. Normal (pre-370) instructions checked the ending address and didn't execute the instruction if there was a violation. MVCL (& CLCL) were supposed to execute the instruction incrementally ... w/o first checking the ending address. The 370/125 had a "bug" in the MVCL microcode which first checked the ending address and didn't execute the instruction at all.
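
a model of the semantic difference (my sketch, not the actual microcode): MVCL is defined to proceed incrementally, padding a byte at a time and taking the addressing exception only when a reference actually goes out of bounds ... which is exactly what the storage-clearing loop relied on; pre-checking the ending address instead suppresses the whole instruction.

# model of the difference (a sketch, not the actual microcode):
# clearing storage via MVCL: to-length 16mb, from-length 0, pad 0x00
def mvcl_incremental(storage, to_addr, to_len, pad=0):
    # architected: proceed byte at a time, fault only when a reference
    # actually exceeds real storage
    for i in range(to_len):
        if to_addr + i >= len(storage):
            return "addressing exception after clearing %d bytes" % i
        storage[to_addr + i] = pad
    return "completed"

def mvcl_125_bug(storage, to_addr, to_len, pad=0):
    # the 370/125 bug: pre-check the ending address and suppress entirely
    if to_addr + to_len > len(storage):
        return "addressing exception, nothing cleared"
    return mvcl_incremental(storage, to_addr, to_len, pad)

MB = 1024 * 1024
print(mvcl_incremental(bytearray(256 * 1024), 0, 16 * MB))  # clears all 256k
print(mvcl_125_bug(bytearray(256 * 1024), 0, 16 * MB))      # clears nothing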

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/

360/370 instruction cycle time

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360/370 instruction cycle time
Newsgroups: bit.listserv.ibm-main
Date: Wed, 06 Dec 2000 17:19:21 GMT
jbroido@PERSHING.COM (Jeffrey Broido) writes:
Leonard, I will most definitely need to find that Functional Characteristics manual for the 360/67. As hard as I try to recall another number, 50,000,000 sticks in my head. Remember, this was for RR instructions or, more specifically, the LR instruction, which is one of the simplest. The aggregate MIPS rate, of course, was a mere fraction of that.

Bomber


I remember the 65/67 functional characteristics having approx. .6 microseconds avg. per RR instruction ... and RS instructions were in the 1.6 microseconds avg. per instruction range.

the 65/67 fetched instructions in double words at 750ns per double word. RR time (2-byte instruction) would have included 1/4th of the 750ns instruction fetch time (187ns) plus the actual instruction decode and execution ... for about 600ns. RS instructions (at 4 bytes) would have included 1/2 of the 750ns instruction fetch (375ns) plus an operand storage fetch/store (750ns) plus instruction decode and execution (i.e. 1125ns plus instruction decode and execution). There was also something like 100-150ns more if an RS instruction used both base & index addressing (as opposed to just base reg. addressing).

the 67, when operating in virtual memory mode, increased the effective memory access time by 150ns to 900ns, allowing for the associative array translation from virtual->real. RR instructions then became 1/4th of the 900ns for instruction fetch (225ns) plus instruction decode and execution. RS instructions became 1/2 of the 900ns for instruction fetch and 900ns for operand fetch/store plus instruction decode and execute (450ns+900ns ... 1350ns plus instruction decode and execute).
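
those timings reduce to a simple model (my arithmetic, just restating the figures above): charge instruction fetch as (instruction length / 8) of a doubleword fetch, plus one storage access per operand, with decode/execute on top.

# restating the figures above as a model: instruction fetch costs
# (instruction length / 8) of a doubleword fetch; each storage operand
# costs a full access; decode/execute time comes on top
def t(instr_bytes, operand_accesses, mem_ns):
    return instr_bytes / 8 * mem_ns + operand_accesses * mem_ns

for mem_ns in (750, 900):     # 750ns real mode; 900ns with DAT on the 67
    rr = t(2, 0, mem_ns)      # RR: 2-byte instruction, no storage operand
    rs = t(4, 1, mem_ns)      # RS: 4-byte instruction, one operand access
    print(mem_ns, rr, rs)     # 750: 187.5/1125.0  900: 225.0/1350.0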

the associative array was an 8-entry ... fully associative virtual->real lookup. A miss in the associative array ... of course would have resulted in access to cr0 for the segment table, index the segment table, get the pagetable pointer, index the pagetable, retrieve the page virtual->real translation and put it into the associative array ... and then decode the address.

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/

360/370 instruction cycle time

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360/370 instruction cycle time
Newsgroups: bit.listserv.ibm-main
Date: Wed, 06 Dec 2000 17:56:53 GMT
Tony Harminc writes:
The 165 and 168 both had only a 2-way I-stream with branch prediction. High speed multiply was an option on the 165. The basic cycle time for both was 80ns, but the later 168s were overall much faster, particularly when running a multi address-space OS like MVS or VM, probably because the STO stack was small and inefficient on the 165-II.

I think boundary alignment got faster on the 168; on the 165 the typical timing formula for an RX instruction included a term like "plus 2.54BA1" where BA1 was 0 if the operand was appropriately aligned, and 1 if not. Quite a severe penalty, considering that the formula was in microseconds!


the tlb (page virtual->real translation) on the 168 was 128 entry ... i believe 4-way associative ... with 7-deep sto-stack (each tlb entry had 3 bit identifier marking it as either empty or associated with one of 7 specific virtual address spaces).

it was somewhat tailored to MVS, since one of the address bits used to index the tlb was the 8mbyte address bit. VM virtual address spaces were typically less than 8mbytes, and so only half the tlb entries tended to be available. MVS laid out the kernel in virtual memory with the supervisor <8mbytes & applications >8mbytes ... so it tended to have half the tlb entries for the supervisor and half for applications.
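
to make the 8mbyte-bit observation concrete (an illustrative guess at the indexing on my part, not the actual 168 wiring): 128 entries 4-way set-associative means 32 congruence classes, i.e. 5 index bits; if one of them is the 2^23 bit of the virtual address, every address below 8mbytes has that bit zero and can only ever reach half the classes.

# illustrative guess (not the actual 168 wiring): 128 entries / 4-way
# = 32 congruence classes = 5 index bits, one of which is the 8mbyte
# bit (2**23) of the virtual address
def tlb_class(vaddr):
    low4 = (vaddr >> 12) & 0xF      # four low bits of the page number
    m8 = (vaddr >> 23) & 0x1        # the 8mbyte address bit
    return (m8 << 4) | low4

# a VM address space living entirely below 8mbytes reaches only classes 0-15
print(sorted({tlb_class(page << 12) for page in range(2048)}))   # 0..15
# MVS, supervisor <8mb / applications >8mb, splits the classes evenly
print(tlb_class(0x700000), tlb_class(0xF00000))                  # 0 16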

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key https://www.garlic.com/~lynn/

360/370 instruction cycle time

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360/370 instruction cycle time
Newsgroups: bit.listserv.ibm-main
Date: Wed, 06 Dec 2000 18:18:51 GMT
Anne & Lynn Wheeler writes:
I remember the 65/67 functional characteristics having approx. .6 microseconds avg. per RR instruction ... and RS instructions were in the 1.6 microseconds avg. per instruction range.

30, 40, 50 had inboard channels ... memory bus was shared between cpu and I/O ... and cpu processor tended to cycle steal back & forth between acting as cpu and as channel (135, 145, 155, 158, etc also had inboard channels with processor cycle stealing).

65(& 67) had outboard channels ... so had more cpu available for program execution.

the 67 multiprocessor was unique. It had tri-ported memory and an independent channel controller ... allowing simultaneous access for both cpus and the channel controller. tri-ported memory slowed memory access down by about another 15% (i.e. even a half-duplex 67 ... operating in single processor 65 mode running MFT ... memory access took 15% longer than real 65).

the interesting thing was that a half-duplex 67 had significantly higher workload thruput than a simplex 65/67 ... for workloads that were both 100% CPU and concurrent heavy I/O (typical of many cp/67 installations). The difference was that the 15% slowdown on each memory access (for the tri-ported memory) was more than offset by the elimination of the memory bus cycle stealing that I/O activity caused in single-ported memory.

The 115/125 were somewhat unique low-end processors (and boeblingen is reputed to have gotten their hands slapped for the design). The 115/125 supported a 9-port common memory bus that typically had 3-5 (up to 9) microprocessors all sharing the same memory bus in a generalized multiprocessor architecture. The 115 had the same microprocessor installed at each memory bus position ... the difference was that the different microprocessors had different program loads ... i.e. one of the microprocessors had the 370 instruction simulation program load ... the other microprocessors had control unit and/or other function/feature program loads. The 125 differed from the 115 only in that the microprocessor with the 370 instruction simulation program load was about 25% faster than the other microprocessors (giving the 125 about 25% faster 370 execution than the 115).

The follow-on to the 158/168 was 3031, 3032, & 3033. The big difference was that the 3031, 3032, & 3033 had outboard channel directors (last seen in the 360/67). The 303x channel directors were essentially the 158 horizontal micro-code engine w/o the 370 instruction simulation (just the channel I/O support).

The 3031 was essentially a 158 repackaged for the outboard channel director (i.e. effectively a two-processor 158 ... one dedicated to 370 instruction execution and one dedicated to channel i/o) ... with other misc. enhancements the 3031 was about 40% faster than a 158.

The 3032 was essentially a 168 repackaged with outboard channel director.

The 3033 started out as the 168/3032 wiring diagram mapped to newer chip technology that was about 20% faster than the 168 chip technology. The new chips also had about 10 times as many circuits per chip as the 168 chip technology. The straightforward remap started out with the 3033 being approx. 20% faster than the 3032. Various optimizations then redesigned specific portions of the 3033 to take advantage of more intra-chip operations (rather than just the straightforward inter-chip operations of the 168/3032 remap).

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/

360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
Newsgroups: alt.folklore.computers
Date: Thu, 07 Dec 2000 15:54:04 GMT
Eric Fischer writes:
Interestingly, the on-off switch seems to have been used to switch between the user's own program and a debugger, and the push button was used to request longer time slices during display-oriented sections of a program.

CMS appears to have inherited the blip command, possibly from CTSS ... which with a long-running program would wiggle the 2741 ball every two cpu seconds (did an upper/lower case shift w/o actually typing anything). this might be considered slightly similar to current day HASH in ftp. Also, in the translation from the 2741 typeball to 24x80 character screens ... it was no longer able to wiggle the typeball, so it translated into moving the cursor across the screen.

the original CP would up the user program's dispatching quanta anytime a user program did any sort of terminal i/o. as a result, various people ... who didn't feel a blip every two seconds of cpu was sufficient ... would throw in terminal i/o possibly every 100 ms to try and increase their thruput.

it was one of the reasons that i had to do dynamic feedback fair-share scheduling. the change was that the time-slice quanta for "interactive" were much smaller ... but dispatched proportionally more often (in theory two tasks ... one running consistently with the small interactive time-slice and another task running consistently with the background time-slice ... would actually accumulate CPU resources at the same rate ... all other things being equal). that minimized the motivation to throw in the gratuitous terminal i/o.
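
the principle in miniature (my sketch, not the actual CP code): give interactive tasks a quantum k times smaller but dispatch them k times as often, so the accumulation rates match and gratuitous terminal i/o buys nothing.

# sketch of the principle (not the actual CP scheduler): interactive
# gets a quantum k times smaller but is dispatched k times as often
k = 4
quantum = {"background": 400, "interactive": 400 // k}   # cpu ms per slice
used = {"background": 0, "interactive": 0}

for _ in range(100):
    used["background"] += quantum["background"]          # one dispatch
    for _ in range(k):                                   # k dispatches
        used["interactive"] += quantum["interactive"]

print(used)     # both accumulate 40000 ... equal rates, all else equal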

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/

IBM's mess (was: Re: What the hell is an MSX?)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's mess (was: Re: What the hell is an MSX?)
Newsgroups: alt.folklore.computers
Date: Sat, 09 Dec 2000 16:32:10 GMT
Larry Anderson writes:
The reason IBM made it in the PC market was not because they made 'a good business machine' (HA!), it was mainly because they had a microcomputer with the name "IBM". Business people didn't know what they were buying back then either; the only thing they had to go on was the axiom: 'you never get fired for buying IBM.' Such myths sold a lot of lousy software at great prices too (just label it "business software" and make it sorta run on an IBM and you were rich.)

on the hardware side ... it had the ibm logo on it ... and the software side they encouraged lots of people to write software for it.

i've claimed that a large part of the success was a combination of 1) the ibm name, 2) spreadsheet (& misc. other business) software (mostly non-ibm), & 3) mainframe terminal emulation

a single keyboard/terminal on the desktop could act as both local computing and also provide the necessary mainframe access for the rest of the necessary business computing; a basic pc was only a little more expensive than an ibm mainframe terminal. a company might have 50k-100k to millions of mainframe terminals ... i.e. the ibm mainframe terminal market at the time was a much larger installed base than personal computers ... upgrading each of those terminals created a huge installed base for the people writing business (or other kinds of) software.

In this market, the justification for personal computing on the desk wasn't the total cost of the pc ... just the incremental price difference between an ibm-brand mainframe terminal ... and an ibm-brand pc with host terminal emulation. At some point, the software base got large enuf that some businesses could start to justify the complete PC price for local computing (not just the incremental difference) ... i.e. along the way various market scale thresholds started to kick in.

i had this argument with some of the mac guys before the mac shipped (at the time, my brother was the apple regional sales guy for a couple states, and he would periodically be in town and we would get together for dinner with some of the mac people).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM's mess (was: Re: What the hell is an MSX?)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's mess (was: Re: What the hell is an MSX?)
Newsgroups: alt.folklore.computers
Date: Sat, 09 Dec 2000 18:09:41 GMT
in the mid to late 70s ... mainframe terminals tended to be in terminal rooms and/or for data entry, call center, & processing people ... there were a few scattered around on desks for professional programmers and engineers. authorizing a terminal on a person's desk was part of the annual budget planning and required VP-level sign-off authorization. there were starting to be some exceptions, like the HONE system for all the branch and field people ... where it was becoming part of the standard product handling process that a terminal had to be used.

... misc refs:
https://www.garlic.com/~lynn/2000f.html#62

even the prospect of email, by itself wasn't enuf to break the barrier (i.e. the internal network was larger than the whole arpanet/internet up thru possibly sometime in 1985).

one friday night over lots of beer we were pondering how to break some of this log jam. we needed a killer app to go with email and some business numbers. We came up with the online corporate telephone book and the requirement that it had to be implemented in less than 2 person-weeks time and take less than 1/2 of one person's time to support and maintain it for the whole company. We also did some calculations that showed that the 3-year depreciated cost of a terminal was less than the monthly cost of the business telephone ... which every employee got on their desk w/o question.

shortly after that (combination of email, online corporate telephone book, and cost analysis) there was a 3 month period where the majority of new terminal allocation out of the annual budget disappeared onto middle management and executive desks. Shortly after that they eliminated the requirement that terminal authorization on individual desks required a VP signature.

as PCs started to become available with terminal emulation ... it was still possible to show that a 3 year depreciated cost for PC was still less than monthly business phone on people's desks. The idea that it was a single keyboard display ... and not a whole lot of keyboards & displays that all had to fit on the same desk was an important issue.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

360/370 instruction cycle time

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360/370 instruction cycle time
Newsgroups: bit.listserv.ibm-main
Date: Mon, 11 Dec 2000 16:26:29 GMT
smetz@NSF.GOV (Metz, Seymour) writes:
Since I know of one shop that had three of them, I doubt that the correct number is 2. I'd guess that there were more sold than the 91 and 95 figures.

The 370/195 was not a true S/370; it was missing a few key features.


initially 370s (135, 145, 155, 165) were announced w/o virtual memory & relocation hardware. There were some misc. new instructions (which I seem to have forgotten at the moment, although i/o had SIOF, CLRCH, CLRDV, etc). One of the big things in s/370 ... that doesn't show in the POP was a lot of RAS & instruction retry.

My understanding was that a major difference between 360/195 & 370/195 was in the RAS area, instruction retry, etc (and it never got any of the virtual memory stuff).

I never actually worked on the 195 ... I did spend some time with the engineers tho ... they were looking at building a dual i-stream version. Under most codes, the 195 had a horrible problem with branches draining the pipeline. A little hardware ... an extra PSW, an extra set of registers, a one-bit i-stream tag for the pipeline, etc ... would allow supporting a dual i-stream (from a software standpoint, a two-processor) machine ... which had a somewhat better chance of keeping the pipeline fed.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

360/370 instruction cycle time

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360/370 instruction cycle time
Newsgroups: bit.listserv.ibm-main
Date: Mon, 11 Dec 2000 17:25:49 GMT
Anne & Lynn Wheeler writes:
initially 370s (135, 145, 155, 165) were announced w/o virtual memory & relocation hardware. There were some misc. new instructions (which I seem to have forgotten at the moment, although i/o had SIOF, CLRCH, CLRDV, etc). One of the big things in s/370 ... that doesn't show in the POP was a lot of RAS & instruction retry.

ok ... i went and dug out some old stuff from the boxes. I've found three gx20-1850-3 370 reference cards (fourth edition, nov. 1976), a 360 green card (gx20-1703-7) and two 360/67 blue cards (229-3174-0).

The 370 reference summary on the front lists:


conditional swapping                cds, cs
cpu timer & clock comparator        sckc, spt, stckc, stpt
direct control                      rdd, wrd
dynamic address                     lra, ptlb, rrb, stnsm, stosm
input/output                        clrio, siof
multiprocessing                     sigp, spx, stap, stpx
psw key handling                    ipk, spka

...............

there were also the long instructions ... mvcl & clcl

& the byte instructions icm, clm, stcm

the trace & per stuff mc,

floating point extended

....................

many of the above used the B2 opcode and then the byte following B2 as a sub-opcode (adding another 256 possible instructions).

multiprocessing, dynamic address, and conditional swapping weren't part of the base, original 370.

the CP/67 "H" system provided virtual 370s running on 360/67. For original 370 this required intercepting the prg1s (op exception) and simulating. Prior to that, most of cp/67 had been involved in intercepting prg2s (priv. excep).

cds & cs also weren't in the original 370. they were done at CSC as an outgrowth of the fine-granularity MP locking work. Ron Smith (one of the people that owned the POP in POK) told us that we wouldn't be able to get an MP-specific instruction added and that we needed to come up with a paradigm that allowed the instruction to be used in uniprocessor mode. That took a couple months and resulted in the programming notes on how to use cs/cds for managing stuff in non-priv. tasks. The original instruction definition was CAS (compare and swap) ... the initials of the person that did the original work were CAS, and it took a couple months to come up with a mnemonic that matched his initials. That was then changed to CS & CDS to cover both single word and double word.
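
the uniprocessor-meaningful paradigm from those programming notes is essentially the lock-free update loop: fetch, compute the new value, compare-and-swap, retry if some other task got there first. a sketch (python, with a lock standing in for the instruction's hardware atomicity):

# sketch of the CS programming-notes paradigm: fetch, compute, CS,
# retry on mismatch (a python lock stands in for the instruction's
# hardware atomicity)
import threading

_atomic = threading.Lock()
counter = [0]

def compare_and_swap(cell, old, new):
    with _atomic:                  # models the single atomic instruction
        if cell[0] == old:
            cell[0] = new
            return True
        return False               # somebody else updated first ... retry

def add_one(cell):
    while True:
        old = cell[0]              # fetch the current value
        if compare_and_swap(cell, old, old + 1):
            return                 # CS succeeded

threads = [threading.Thread(target=lambda: [add_one(counter) for _ in range(1000)])
           for _ in range(4)]
for th in threads: th.start()
for th in threads: th.join()
print(counter[0])                  # 4000 ... no lock held around the update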

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM's mess (was: Re: What the hell is an MSX?)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's mess (was: Re: What the hell is an MSX?)
Newsgroups: alt.folklore.computers
Date: Tue, 12 Dec 2000 00:18:39 GMT
jcmorris@jmorris-pc.MITRE.ORG (Joe Morris) writes:
...and (IMHO) the real icing on the cake for this was the "Yale ASCII IUP" which allowed async-connected terminals -- aka "users at home with a PC" -- to connect to the IBM mainframe via a normal (relatively) inexpensive modem, yet appear to the mainframe as if they were a local 3270.

and even before that ... ibm came out with the 3101 (an ascii screen device) and we got a lot of them internally as home terminals (predating ibm/pcs by a couple years) ... the internal PVM (passthru virtual machine, which allowed simulated local 3270 sessions to be handled anywhere on the internal network) was upgraded to support the 3101 block mode (there was a choice between coming in directly as an ascii terminal or optionally thru PVM as a simulated 3270).

Later, when ibm pcs started appearing as home terminals, the PVM 3270 simulator support was upgraded so that there was a dictionary (indexing data already transmitted), compression & some other transmission optimization schemes with the terminal simulator on the pc (attempting to mask some of the slowness of the 2400 baud modems).
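
the "index data already transmitted" idea survives today as preset-dictionary compression; a rough latter-day analogue (zlib's preset dictionary standing in for whatever PVM actually did):

# rough latter-day analogue (zlib preset dictionary standing in for
# whatever PVM actually did): both ends prime the compressor with data
# already transmitted, so repeated screen contents cost almost nothing
import zlib

already_sent = b"MENU: 1) MAIL  2) FILES  3) LOGOFF   USERID: LYNN  " * 10
screen = b"MENU: 1) MAIL  2) FILES  3) LOGOFF   USERID: LYNN  READY;"

c = zlib.compressobj(9, zlib.DEFLATED, 15, 9, zlib.Z_DEFAULT_STRATEGY,
                     zdict=already_sent)
wire = c.compress(screen) + c.flush()

d = zlib.decompressobj(zdict=already_sent)
assert d.decompress(wire) == screen
print(len(screen), "->", len(wire), "bytes over the 2400 baud link")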

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

360/370 instruction cycle time

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360/370 instruction cycle time
Newsgroups: bit.listserv.ibm-main
Date: Tue, 12 Dec 2000 00:34:18 GMT
b19141@ACHILLES.CTD.ANL.GOV (Barry Finkel) writes:
Eastern, Delta, and United all had 195s; I do not remember which airline(s) had more than one. NOAA in DC had a 195. Oak Ridge National Lab had a 95. I have a document "Functional Specs for the 360/92", which I think was never built, but which IBM bid in response to an Argonne RFP. --Barry Finkel

it was also analysis of the eastern 195 workload that was one of the major things (that i saw) that put the nails into the FS coffin ... aka if FS was implemented using better/faster technology than the 195, and the eastern application re-implemented to run on that machine, the best FS thruput would be less than the eastern application running on a 370/145 (i.e. the FS architecture carried at least a 10:1 performance penalty ... at least for the eastern workload/application).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Could CDR-coding be on the way back?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Could CDR-coding be on the way back?
Newsgroups: comp.lang.lisp,comp.arch
Date: Tue, 12 Dec 2000 16:58:31 GMT
Jan Ingvoldstad writes:
Today, a "full" Usenet feed is around 250 GB/day, and the yearly growth is around 200%. With two decades of continued growth at the same rate, that'll be 250*3^20 gigabytes, or:

871 696 100 250 gigabytes


lets say that there are currently 6 billion people ... then 250gb/day amounts to about 40 bytes/person/day. say in 20 years there are twice the number of people, or 12 billion people on the planet ... then the 871 696 100 250 gigabytes works out to around 7-8 million bytes per person/day.

already the majority of the bytes are not text ... but binary of one form or another. lets say that text is only 10% of the total ... and something is worked out to handle the binary in some other way. That would still say in 20 years that every person on the planet would have to be generating around 750,000 text characters per day; say 6 characters per word ... around 120,000 words/day ... say 300 words/page ... around 400 pages/person/day.

majority of the usenet growth has been binary with some increase due to more people involved. however, once every person in the world is spending all their time only generating material for usenet ... i would think that the growth would start to level off.
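
checking just the stated growth arithmetic: 200% yearly growth is a tripling, so two decades is a factor of 3^20.

# checking the stated arithmetic: 200%/year growth = tripling each year
print(250 * 3**20)      # 871696100250 gigabytes/day, the figure quoted
print(250e9 / 6e9)      # ~41.7 bytes/person/day today ... "about 40"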

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

360/370 instruction cycle time

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360/370 instruction cycle time
Newsgroups: bit.listserv.ibm-main
Date: Tue, 12 Dec 2000 17:06:20 GMT
Chris_Blaicher@BMC.COM (Blaicher, Chris) writes:
None of the 370/xx5 boxes had DAT. They were the 370/xx8 boxes. 135 -> 138, 155 -> 158, 165 ->168

none of the 370/xx5 boxes were announced with DAT. The 135 & 145 had DAT, but the machines were initially delivered with a microcode load that didn't enable the DAT function (although there was all sorts of speculation about what the "XLAT" light on the front panel of the 145 referred to).

when virtual memory was announced ... 135s & 145s had DAT enabled by a new microcode load. The 155 & 165 had to have hardware installed in the field to allow them to support virtual memory. There were lots of 155s & 165s field upgraded.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

No more innovation? Get serious

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: No more innovation?  Get serious
Newsgroups: alt.folklore.computers
Date: Wed, 13 Dec 2000 20:24:19 GMT
benc@krustbustr.hawaga.org.uk (Ben Clifford) writes:
>Doug Engelbart. I think Stanford University has the video online. I
>searched, but it wanted registration. I watched those same videos a
>couple of months ago too. Even though it was more than 30 years ago, an
>early mouse driven gui still basically looks just like ones we use
>today.

A friend of mine based his BSc project on making a fully keyboard operable GUI - the justification was something like: Englebart designed all this stuff for new users to use and didn't intend for it to be used in the same way by "power users".

The concept seemed to provoke brain overloads in the Human/Computer Interaction professors who were examining him.

The HCI group of that Computer Science department seem stuck on this mouse thing, and also on the idea that there are lots of "grown-ups" around who have not used computers before. This latter I think will become increasingly more irrelevant as little kiddies get exposed to computers more and earlier.


a lot of the GUIs typically try and provide specific context for the part-time and/or casual user (this has frequently equated to executives in the past).

many of these features have tended to obstruct the full-time/power user that carries the context in their head.

the analogy is something like making a car that a five year old could drive the very first time they enter an automobile ... w/o requiring them to ever have had prior experience, training and/or practice in establishing the mental context associated with driving a car ... i.e. high schools have tended to offer both driving classes and typing classes to give people a sufficient skill base & context.

Even with all that ... indy race cars tend to be somewhat different than the run of the mill street car and are much more effective in the hands of a skilled and experienced user.

Frequently, the novice GUI argument ... is that it reduces everybody to the productivity of the lowest common denominator (experienced indy drivers are unable to operate a hypothetical gui indy car any more effectively than an untrained five year old).

From an economics standpoint ... there are at least 3-4 orders of magnitude more casual users in need of computer training wheels than there are professional power racer computing users. The corresponding market can be worth hundreds of billions as opposed to hundreds of millions ... so what if the productivity of the power racer computing user is reduced to that of the training wheel user.

as a total aside ... recently going thru some boxes that had been in storage looking for something else ... i ran across a whole bunch of stuff from doug (copies of early stuff, press releases, the whole augment description and documentation, etc). When M/D was buying tymshare ... I got called in to evaluate gnosis for them ... since that wasn't going to be part of the M/D operation ... and I also got to spend some time talking to doug since he also wasn't going to be part of the ongoing M/D operation.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM's mess

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's mess
Newsgroups: alt.folklore.computers
Date: Wed, 13 Dec 2000 20:29:46 GMT
cbh@REMOVE_THIS.teabag.fsnet.co.uk (Chris Hedley) writes:
I was under the impression that the 3278 & 3279 were basically character mode terminals (well, ish; the keyboard had to be polled by the controller, IIRC) and it was actually the 3274 that did the block mode stuff. Could be wrong, though.

the transition from the 3272 controller & 3277 display to the 3274 controller & 3278/3279 displays moved a lot of the intelligence from the keyboard/display back into the controller ... reducing the per-unit manufacturing costs of the 3278 & 3279.

it also eliminated being able to make field engineering modifications to the terminal to improve the human factors (i.e. like a FIFO box to mask the keyboard-locked problem, and modifications to get the repeat, delay, and various cursor positioning features to operate like one would wish).

random refs:
https://www.garlic.com/~lynn/2000c.html#63

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

A question for you old guys -- IBM 1130 information

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A question for you old guys -- IBM 1130 information
Newsgroups: bit.listserv.ibm-main
Date: Thu, 14 Dec 2000 00:27:17 GMT
i_am_bobf writes:
Here's a question for some of you who've been around for a while. Does anyone remember the IBM 1130? I've been trying to find someone, somewhere

i remember the 1130. csc had a 2250mod4 ... i.e. the 2250mod1,2,3 had control units that attached to a 360 ... the 2250mod4 was an 1130/2250 vector graphics combo.

somebody had ported spacewars to run on it (from pdp1?)

it was also somewhat the genesis of the internal network ... the first "network" link at csc was between the 360/67 and the 1130. the csc network support went on to grow into vnet, the internal network, bitnet, etc ... i.e. the internal network was larger than the (whole) arpanet/internet until approx. 1985.

random refs:
https://www.garlic.com/~lynn/2000b.html#67
https://www.garlic.com/~lynn/2000e.html#32
https://www.garlic.com/~lynn/2000d.html#15
https://www.garlic.com/~lynn/97.html#2

... see the scamp (5100 pc) reference emulating 1130 allowing apl/1130 to run
http://www.brouhaha.com/~eric/retrocomputing/ibm/5100/

page i just ran across doing alta-vista search
http://www.mindspring.com/~hshubs/1130/

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

SSL as model of security

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SSL as model of security
Newsgroups: comp.security.unix
Date: Thu, 14 Dec 2000 18:25:17 GMT
Jorge Kinoshita writes:
Hello,

I use Linux everyday and sometimes I think: SSL is very good and I do not know of any problem in this protocol. If everybody change ftp to sftp, telnet to ssh, etc. would not all (almost all) security problems have been solved? Am I wrong? For instance: would it be possible to have a security hole in sftpd, considering that the authentication is done first? Thanks.

Jorge Kinoshita.


you can view server SSL as having two parts ...

exchange of (random) secret key that is then used for session encryption

and

validating that the node that you think you are connecting to somehow relates to the node information listed in the server's digital certificate.

in the past there have been various problems with the method that some SSL implementation code used to generate the secret key, the number of bits of the key and/or the symmetric encryption algorithm used for the session encryption. there have also been issues during the key & algorithm negotiation phase, where a man-in-the-middle attack could inject something that resulted in the negotiation downgrading from 128-bit to 40-bit.

independent of the symmetric encryption area of SSL is the part of the protocol dealing with ... are you really talking to the node that you think you are talking to. that gets into whether the node information listed in the certificate means anything, and what the process is by which an organization can acquire an acceptable certificate carrying specific node information (i.e. you could have the strongest symmetric encryption algorithm in the world with a 512-bit key to protect session data in flight and still be giving all your information to the bad guys).
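
both parts show up directly in a modern TLS API ... a minimal python sketch (example.com as a stand-in host): the handshake negotiates the session key, and the hostname-vs-certificate check is the "am I really talking to that node" half ... which is only as good as the process that issued the certificate.

# the two halves of server SSL in a modern API (example.com is just a
# stand-in host): the handshake negotiates the session key; checking
# the certificate against the name dialed is the "is this really the
# node I think it is" half
import socket, ssl

ctx = ssl.create_default_context()   # verifies the certificate chain
ctx.check_hostname = True            # ...and name-in-cert vs. name dialed

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version(), tls.cipher())     # the key-negotiation half
        print(tls.getpeercert()["subject"])    # the identity half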

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Who Owns the HyperLink?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Who Owns the HyperLink?
Newsgroups: alt.folklore.computers
Date: Thu, 14 Dec 2000 21:29:26 GMT
"David C. Barber" writes:
British Telecom claims to own a patent on the hyperlink, and hence the main user interface on the world wide web. Now they've started suing to collect royalties -- http://www.theregister.co.uk/content/6/15485.html . Anyone have reason to think they're full of hot air here?

David Barber


it references a patent filed in 1976 and granted in 1989.

engelbart's augment work or nelson's hypertext work may or may not qualify as prior art.

NLS/augment had hypermedia features (is there overlap with hyperlink ... can you have hyper media/text w/o hyperlinks?)
http://www3.zdnet.com/yil/content/mag/9611/hyper/foot4.html
http://www.cc.gatech.edu/classes/cs6751_97_fall/projects/ms-squared/engelbart.html
http://scis.nova.edu/~speranor/DCTE790Assignment2-HTML.htm
https://web.archive.org/web/20010219211434/http://scis.nova.edu/~speranor/DCTE790Assignment2-HTML.htm

nelson & hypertext
http://hoshi.cic.sfu.ca/~guay/Paradigm/Nelson.html
https://web.archive.org/web/20010406064423/http://hoshi.cic.sfu.ca/~guay/Paradigm/Nelson.html
http://aisr.lib.tju.edu/~murray/internet/sld041.htm
http://www.sfc.keio.ac.jp/~ted/XU/XuPageKeio.html
https://web.archive.org/web/20010411074655/http://www.sfc.keio.ac.jp/~ted/XU/XuPageKeio.html
http://www.sun.com/950523/columns/alertbox/history.html
https://web.archive.org/web/20010401024947/http://www.sun.com/950523/columns/alertbox/history.html

from one of the above ..
Ted Nelson

Ted Nelson's contribution to the development of hypertext and hypermedia are profound, extending even to the coining of the terms hypertext and hypermedia. Since 1960, he has been developing a comprehensive paradigm for the implementation of a distributed hypermedia system that covered the full spectrum of issues; from the algorithms to the economics. The result of this paradigm is the ongoing Xanadu project.

The purpose of Xanadu is to establish Nelson's vision of the Docuverse. Docuverse is the term he coined to describe a global online library containing, in hypermedia format, all of humanity's literature. This concept of the Docuverse is one of the foundational paradigms of the WEB.

Amazingly, he had a prototype running in 1965. This prototype modeled many of the concepts that make up any hypermedia system, including the WEB.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Could CDR-coding be on the way back?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Could CDR-coding be on the way back?
Newsgroups: comp.lang.lisp,comp.arch
Date: Thu, 14 Dec 2000 23:56:53 GMT
Nicholas Geovanis writes:
I should mention that due to previous architecture/design choices, not all of the older 16MB address space was available to the OS and/or DBMS (and/or application) for exclusive, private use. Thus the effective available virtual storage was less than 16MB, sometimes considerably less, dependent on local OS changes as well. Even more reason to enlarge it and to permit access to multiple virtual address spaces.

Any IBM folks want to chime-in here? Are all of the System/370 hands extinct already? I haven't touched a 'frame in 9 years (don't miss them either :-))


a major MVS issue (in 24-bit addressing land) was that it had inherited from SVS, MVT, PCP (back to the early '60s), etc a paradigm/design where the supervisor occupied the same address space as the application programs. This was slightly mitigated in the SVS->MVS transition (in the early '70s), where the restriction that all application programs and all supervisor functions occupy the same 24-bit address space was slightly lifted (single virtual storage became multiple virtual storage ... with a 16mbyte address space for each application ... but with the MVS supervisor and various subsystem components residing in each one).

However, by the late '70s, having all supervisor functions in the same address space as the application, along with various supervisor subsystem requirements, was again starting to severely stress the 24-bit address limit.

while some of the MVS gurus might have believed that they were the biggest address space hogs in the world, some MVS installations were having difficulty leaving even 6mbytes (of the 16mbytes) available to application programs. There were actual applications in the '70s that demonstrated large address space appetites. Some of these were large database transaction subsystems that had to exist in the tiny space left in the 16mbyte space after the MVS supervisor and subsystem requirements were met.

In the initial port of apl/360 to cms/apl ... the apl workspace limit was opened up from the typical 32k-64k bytes to just under 16mbytes. There were a number of applications that actually took advantage of the increased workspace size.

One of those was the HONE service. This was the service in the '70s and '80s that supported world-wide sales, marketing, hdqtrs, and field support operations. As one example: starting sometime in the mid '70s, IBM mainframe orders became so complex that they couldn't be filled out manually ... a HONE application was needed to fill in the order. Another big use of HONE was for economic planning & modeling ... much of the "what-if" processing done today on PC spreadsheets was performed in APL.

In '77, the US HONE operations were consolidated in a single location in california, with what was at the time believed to be the largest single-system-image operation in the world (a cluster of SMP processors sharing a large disk farm). In '78/'79, the single-system image was replicated in dallas and boulder, providing disaster survivability support (in case of a natural disaster, like an earthquake in cal.). This was in addition to the various HONE clones that resided in numerous countries around the world.

Almost the entire HONE "experience" was delivered first on cms/apl and then later on apl/cms.

random refs:
https://www.garlic.com/~lynn/2000f.html#62
https://www.garlic.com/~lynn/2000f.html#30
https://www.garlic.com/~lynn/2000.html#75
https://www.garlic.com/~lynn/99.html#112

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Could CDR-coding be on the way back?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Could CDR-coding be on the way back?
Newsgroups: comp.lang.lisp,comp.arch
Date: Fri, 15 Dec 2000 00:44:38 GMT
... oh yes, and a somewhat separate issue for the 370 ... in addition to the virtual 24bit address limit there was the issue of the real 24bit address limit. With sufficient concurrent applications it was possible to start to stress the 16mbyte real storage limit (and possibly precipitate page thrashing).

many 3033 shops in the late '70s (the 3033 being basically a souped-up 370/168) started seeing these real-storage constraint problems.

the issue was how to address the problem.

CCW (i.e. i/o transfer) already supported 31bit real address with IDALs.

To get some slight relief, the 3033 introduced a hack for supporting more than 16mbytes of real storage. The 370 pagetable entry is 16bits: 12bits specifying the real page number (which, combined with 12bit/4k pages, yields 24bit addressing), an invalid bit, a protection bit, and two unused bits ... something like

NNNNNNNNNNNNPxxI

where "xx" are the unused/undefined bits. The 3033 hack was to use the two "xx" bits and prefix them to the 12bit page number to allow addressing up to 2*14 real pages ... or 2*26, 64mbytes of real storage. Executable code was still limited to 24bit virtual addresses but it was possible to allocate virtual pages in real storage above the 24bit line by setting the appropriate bits in the page table entry. And of course, the standard 370 CCW IDALs already had 31bits available for addressing real storage in i/o operations.

cross-memory services was also introduced with the 3033, in an attempt to help get some of the supervisor subsystem functions out of the same address space as the application (at least getting things to the point where maybe a whole virtual 8mbytes was available to applications). In order not to have a significant downside impact on the MVS "address-pointer" parameter paradigm, these supervisor subsystem functions had to reside in their own address space while still directly supporting services that required addressing the application's virtual address space. cross-memory services introduced new addressing modes allowing instructions to address virtual storage different from the virtual address space they were executing in.

random refs:
https://www.garlic.com/~lynn/99.html#99
https://www.garlic.com/~lynn/99.html#7
https://www.garlic.com/~lynn/2000e.html#57
https://www.garlic.com/~lynn/2000g.html#11

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Could CDR-coding be on the way back?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Could CDR-coding be on the way back?
Newsgroups: comp.lang.lisp,comp.arch
Date: Fri, 15 Dec 2000 01:01:48 GMT
Anne & Lynn Wheeler writes:

https://www.garlic.com/~lynn/99.html#99


oops finger slip, that should have been:
https://www.garlic.com/~lynn/99.html#190

giving the 3033 announce & ship dates

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Could CDR-coding be on the way back?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Could CDR-coding be on the way back?
Newsgroups: comp.lang.lisp,comp.arch
Date: Fri, 15 Dec 2000 20:12:58 GMT
pekka@harlequin.co.uk (Pekka P. Pirinen) writes:
As long as the array code is not interrupted by GC, leaving a pointer to the data array on the stack/registers. Furthermore, having a data array that the collector cannot parse without making reference to an array header makes some barrier techniques harder, for example those based on VM page protection. I wouldn't recommend it.

for the port of apl/360 to cms/apl ... this was one of the problems that we ran into. going from essentially a swapped 32kbyte-64kbyte workspace environment to a demand-paged 16mbyte workspace environment ... the original apl had the property that every assignment allocated new space (with the prior location simply abandoned). when the end of the workspace was reached, garbage collection would compact all allocated space and the process started over.

one of the other projects at csc was vs/repack ... which included page reference monitoring and drawing "pictures" of the operation with a special TN print train. At times the halls of the 4th floor, 545 tech. sq., were papered with these reference pictures: basically five-foot-long sections of reversed green-bar paper with storage accesses along the length and time along the width. A whole series of these were taped to the wall (length running vertical), giving several seconds of program execution as you moved down the hall.

The bottom of the wall tended to be a solid pattern of use ... but there was a very strong sawtooth pattern where pages were touched, used for very short periods, and then storage allocation moved up until it reached the top of storage ... and then there was a solid line as all allocated data was compacted back down to low storage. In a virtual memory environment, this tended to result in an application using the maximum amount (all) of available virtual memory regardless of the application's size or complexity ... with a bimodal reference pattern ... partially LRU and partially MRU (i.e. the least recently used page wasn't likely to be used again for awhile).

Part of the less obvious work that we had to do in the '71 time-frame for the port of apl/360 to cms/apl was a dynamically adaptive garbage collector.
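
purely for illustration (a toy, nothing like the actual interpreter code), the original allocation behavior amounts to a bump allocator that sweeps the whole workspace before compacting ... which is exactly the sawtooth in the pictures:

#include <stddef.h>

#define WS_SIZE (16u << 20)       /* 16mbyte demand-paged workspace */
static char workspace[WS_SIZE];
static size_t next_free;          /* bump pointer ... only moves up */

/* stand-in for the real collector: a real one copies the live values
   back down to low storage and returns the new high-water mark */
static size_t compact_live_values(void) { return 0; }

/* every assignment takes fresh space and the value's prior location
   is abandoned ... so allocation sweeps up thru the entire 16mbytes
   no matter how small the live data actually is */
void *alloc_value(size_t len)
{
    if (next_free + len > WS_SIZE)            /* hit top of workspace */
        next_free = compact_live_values();    /* sawtooth reset */
    void *p = &workspace[next_free];
    next_free += len;
    return p;
}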

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

stupid user stories

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: stupid user stories
Newsgroups: alt.folklore.computers
Date: Mon, 18 Dec 2000 14:31:31 GMT
"Paul Grayson" writes:
I do recall seing a photo of a mouse with around 20 buttons around 10 years ago. I've no idea if this was a joke.

around '80, one of the corporate human factors groups had built a "chord keyboard" ... a half-spherical palm-size device where the fingers fit into indentations ... at the finger tips were rocker sensors/keys with four(?) possible "down" positions. it could be moved around on a flat surface as well as used for typing with the finger-tip sensors. some people claimed well in excess of 80 "words"/minute.

most of the other chord keyboards i've seen look more like flat piano keys, and the device isn't designed to move. It has been a while ... but I believe I remember seeing a ("flat") chord keyboard used with the engelbart/augment system at tymshare (I would have to go back and leaf thru the documentation).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Multitasking and resource sharing

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multitasking and resource sharing
Newsgroups: bit.listserv.ibm-main
Date: Mon, 18 Dec 2000 20:55:33 GMT
edjaffe@PHOENIXSOFTWARE.COM (Edward E. Jaffe) writes:
It looks as if you're "reinventing the wheel" here. For as long as I can remember, the principles of operation has had sample code to do exactly what you want to do. Appendix "A" contains the routines entitled, "Lock/Unlock with FIFO Queuing for Contentions" and "Free Pool Manipulation". For over ten years, we have been using a combination of those two routines in a generalized "lock manager" that provides suspend lock services to nearly all of our mainframe products. I can assure you it works perfectly, no matter how many CPUs there are and no matter what language the calling routine is written in.

the programming notes ... on how to use CS/CDS effectively for both uniprocessor & multiprocessor operation (i.e. application programming w/o going disabled for interrupts) were a requirement for getting compare&swap into the 370 architecture at all (i.e. the cs/cds programming notes magically appeared in the pop at the same time cs/cds appeared). otherwise, the architecture owners said, cs/cds would never get into the architecture & machines.
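
the flavor of those programming notes, recast as a sketch in C11 atomics rather than 370 assembler (my rendition, not the pop's): retry the update with compare&swap instead of disabling for interrupts or taking a lock ... the same code is correct on one CPU or many:

#include <stdatomic.h>

/* shared counter updated concurrently from any number of CPUs */
static _Atomic unsigned long counter;

/* the classic CS loop: fetch the old value, compute the new one, and
   retry if some other processor got in between */
void add_to_counter(unsigned long n)
{
    unsigned long old = atomic_load(&counter);
    while (!atomic_compare_exchange_weak(&counter, &old, old + n))
        ;  /* on failure, "old" is reloaded with the current value */
}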

random ref:
https://www.garlic.com/~lynn/97.html#19
https://www.garlic.com/~lynn/99.html#176
https://www.garlic.com/~lynn/2000g.html#16

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

does CA need the proof of acceptance of key binding ?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: does CA need the proof of acceptance of key binding ?
Newsgroups: sci.crypt
Date: Tue, 19 Dec 2000 21:17:57 GMT
Mok-Kong Shen <mok-kong.shen@t-online.de> writes:
I happened to have today a very short conversation with a person who apparently knows quite a lot about SET. She said that a bank has no absolute assurance that a claimed public key of its customer is indeed genuine (for there is no verification of identity). So the initial character of SET appears rather questionable to me. Maybe experts of the group would like to comment on this. Thanks.

there are all sorts of boundary & discontinuity conditions.

a person has an account number.

they need to get some bank characteristic and public key bound together into a certificate. for the most part, that feature is not being supported by the bank where the account is located.

a transaction is signed with the consumer's private key; the transaction is also (at least) partially encrypted (especially the account number); the transaction, an appended digital signature, and a certificate are transmitted to a merchant over the internet.

the merchant then does some stuff and transmits the transaction, the digital signature and the certificate to an internet gateway.

the internet gateway uses the public key in the certificate to verify the digital signature.

the internet gateway then generates an (iso) 8583 transaction turning on the authenticated signature flag.

the transaction is then handled in the normal way.
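
purely as a stub sketch of the gateway step (names invented; a real gateway calls an actual crypto library for the verify): check the signature using the public key carried in the certificate, and set the flag on the outgoing 8583 transaction accordingly:

#include <stdbool.h>
#include <stddef.h>

struct cert { const unsigned char *public_key; /* ... */ };
struct iso8583 { bool signature_authenticated; /* ... */ };

/* stand-in for a real signature-verification library call */
static bool verify_signature(const unsigned char *pubkey,
                             const unsigned char *msg, size_t msglen,
                             const unsigned char *sig, size_t siglen)
{
    (void)pubkey; (void)msg; (void)msglen; (void)sig; (void)siglen;
    return true;   /* pretend it verified */
}

/* note the certificate travels all the way to the gateway only to
   supply the public key for this one check */
void gateway(const struct cert *c,
             const unsigned char *txn, size_t txnlen,
             const unsigned char *sig, size_t siglen,
             struct iso8583 *out)
{
    out->signature_authenticated =
        verify_signature(c->public_key, txn, txnlen, sig, siglen);
    /* ... then forward as a normal 8583 transaction ... */
}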

.........

couple of notes ...

this is still a standard account number ... i.e. effectively a shared-secret that can also be used in vanilla, non-authenticated transactions ... as a result there is significant exposure in storing it "at rest" (i.e. both SET and SSL encrypt the number while "in flight" ... but the big exposure comes when it leaves internet transmission and is actually used in standard processes).

for retail transactions, identity is a serious privacy issue. authentication wants to know that the entity authorized to execute transactions against an account is the entity executing the transactions. divulging identity (aka an identity certificate) for retail transactions represents a serious invasion of privacy ... and I believe is also counter to current EU regulations (stating that all retail transactions need to be as anonymous as cash ... aka even the name on existing cards and/or magstripes represents a serious privacy problem). An identity certificate could represent an even more serious privacy problem than existing name-embossed cards.

a couple years ago, one of the credit card associations presented numbers at an ISO meeting of 8583 transactions flowing through the network where the authenticated signature flag had been turned on ... but they knew there was no digital signature technology involved.

a couple of sizings of standard SET certificates have put them in the range of 4kbytes to 12kbytes. a typical 8583 transaction is 60-100 bytes, and aggregate peak transaction loads can hit several thousand per second. Actually flowing the SET certificate end-to-end on any real volume of transactions represents a serious capacity problem (roughly a 40-to-200-fold transaction size bloat).

and of course, then there is X9.59 ... a standard from the X9 financial standards body ... the task given the X9A10 work group for X9.59 was to preserve the integrity of the financial infrastructure for all electronic retail payment (account-based) transactions (credit, debit, card, ach, check, etc). It has passed X9 and is entering its two-year trial period.

X9.59 specifies an account number that can only be used in authenticated transactions (eliminating the problem of the account number being a shared-secret representing significant fraud exposure). X9.59 also recommends end-to-end authenticated transactions (following basic, entry-level security principles ... i.e. the party responsible for authorizing and executing the transaction is also responsible for authenticating the transaction).

misc refs to x9.59 work
https://www.garlic.com/~lynn/

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

does CA need the proof of acceptance of key binding ?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: does CA need the proof of acceptance of key binding ?
Newsgroups: sci.crypt
Date: Wed, 20 Dec 2000 15:04:41 GMT
cwsheng writes:
how's the law interpret this incomplete system?

a current debit/atm magstripe card typically has a 4-digit shared-secret PIN .... frequently mailed to the person separately by their bank when they get a new card. some of the digital signature laws with respect to the retail & financial arena have involved, in case of dispute, attempting to shift the burden of proof from the merchant/bank to the consumer ... focusing on how secure the mathematics are ... and somewhat ignoring problems in all the other business processes (as far as i know, such provisions have yet to be passed).

that is ignoring the previously mentioned issue, in the retail transaction case, that real identity proofing represents a serious invasion of privacy (i.e. any requirement to use a real consumer identity certificate in a consumer retail transaction).

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/

does CA need the proof of acceptance of key binding ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: does CA need the proof of acceptance of key binding ?
Newsgroups: sci.crypt
Date: Fri, 22 Dec 2000 04:08:12 GMT
Mok-Kong Shen <mok-kong.shen@t-online.de> writes:
Dumb question: Is it that difficult/impractical for a customer to deposit a public key at his bank and be able to sign all kinds of stuffs, thus avoiding the sort of problems discussed in this thread?

today most people can go to their bank and "re-key" their magstripe debit card with the PIN of their choice. in principle, registering a public key shouldn't be any more difficult.

an interesting challenge is to make it even easier ... comparable to "card activation" when receiving a new card, but instead of an audio-response telephone call, say an online bank website with an SSL session that then runs through a series of challenge/responses about things that both you and the bank know, including things like name, address, account number, various things from statements, and misc. other details.

Depending on the number & quality of these interactions, the bank can assign a risk value to the registration of the public key; in theory, the larger the number &/or the better the quality of the interactions, the closer the risk assignment approaches that of an "in person visit".
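
a hypothetical sketch of that kind of risk assignment (the challenges and weights are entirely invented ... no bank's actual model): the more, and the better-quality, challenges passed, the closer the score gets to an in-person visit:

/* each successful challenge carries a quality weight ... statement
   details are worth more than name & address, which a fraudster can
   obtain much more easily */
struct challenge { const char *what; int weight; };

static const struct challenge challenges[] = {
    { "name & address on file",        1 },
    { "account number",                2 },
    { "amounts from recent statement", 5 },
    { "card-activation details",       5 },
};

/* 0-100 risk score for registering the public key, where 100 is
   treated as equivalent to an in-person visit */
int registration_score(const int passed[], int n)
{
    int score = 0, max = 0;
    for (int i = 0; i < n; i++) {
        max += challenges[i].weight;
        if (passed[i])
            score += challenges[i].weight;
    }
    return max ? (100 * score) / max : 0;
}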

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

stupid user stories

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: stupid user stories
Newsgroups: alt.folklore.computers
Date: Fri, 22 Dec 2000 12:46:25 GMT
Eric Fischer writes:
If I'm remembering right, in the movie Terminator 2 when you see code scrolling by inside the mind of the Terminator, it's a genuine COBOL program. (And in the first Terminator movie, a disassembly of some 6502 machine language, reputedly a routine to relocate Apple DOS into the Language Card.)

in the mid-80s, on a visit to the madrid science center ... they were doing a project with the university ... digitizing a lot of stuff as part of getting ready for the 1492 anniversary.

while there I went to a movie in downtown madrid. in addition to the movie they showed a 15-minute short ... produced at the university ... a very surrealistic thing that I didn't completely follow ... but prominent in it was a wall of tv sets all scrolling the same text at 1200 baud. imagine my astonishment when i recognized a vm/370 kernel "load map" ... what's worse, I could tell the year & month of the kernel build.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

stupid user stories

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: stupid user stories
Newsgroups: alt.folklore.computers
Date: Fri, 22 Dec 2000 12:48:22 GMT
Anne & Lynn Wheeler writes:
"load map" ... what's worse I could tell the year & month of the kernel build.

... based on the list of what fixes were present

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
Newsgroups: comp.arch
Date: Fri, 22 Dec 2000 18:12:02 GMT
"Bill Todd" writes:
Unfortunately, while that approach helps otherwise unintelligent large sequential I/O patterns, it doesn't optimize them (since the data rate obtained is only about half what it could be) and also approximately halves the random I/O rate for operations that could easily be satisfied with far smaller data transfers. So using smaller transfers (e.g., 64 KB or smaller as I suggested elsewhere) and clustering them such that sequential patterns can bundle multiple such units into a single transfer (or concurrently-queued multiple transfers that get to the platters without wasted motion) wins on both fronts, since the bundle may be larger than the compromise value you'd choose. Clustered paging operations use this principle, at least on the write side (and so could certain kinds of file systems for some write activity, though I'm not sure if any do, save for log-structured ones).

some of the ibm mainframe systems going back to the early '80s supported clustered page i/o for both read and write. basically, a cluster was built on the way out and the whole cluster was brought back in on the way in. something akin to the working set was partitioned into cluster-size groups on the way out ... which somewhat improved the probability that pages that tended to be used at the same time were in the same cluster. at the time it was 4k pages and 10-page clusters.
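
a sketch of the outgoing side under my own assumptions (not the actual supervisor code): partition the page-out list, already in approximate working-set order, into fixed-size clusters, so pages used together tend to come back in together:

#include <stdio.h>

#define CLUSTER_PAGES 10    /* 4k pages, 10-page clusters, as above */

/* stand-in for the paging supervisor's cluster write: one i/o moves
   the whole cluster, and a later fault on any member brings the whole
   cluster back in one read */
static void write_cluster(const unsigned *vpages, int count)
{
    printf("page-out cluster of %d pages starting at vpage %u\n",
           count, vpages[0]);
}

/* partition the outgoing list ... pages touched together are adjacent
   in the list, so they land in the same cluster */
void page_out(const unsigned *outlist, int n)
{
    for (int i = 0; i < n; i += CLUSTER_PAGES) {
        int count = (n - i < CLUSTER_PAGES) ? (n - i) : CLUSTER_PAGES;
        write_cluster(&outlist[i], count);
    }
}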

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Could CDR-coding be on the way back?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Could CDR-coding be on the way back?
Newsgroups: comp.arch
Date: Sun, 24 Dec 2000 20:34:36 GMT
Del Cecchi writes:
Here is my suggestion: Rather than the wasteful anarchy that we have today, we could designate one site as the "master" for each newsgroup. All posts would go the master. From there they would propagate through a tree of "shadows". The user could read from a nearby shadow, or register at that shadow to have posts emailed to them. Perhaps masters would be set up by major divisions, like comp, rec, etc. If a post has expired at the shadow it could be retrieved from another shadow or the master because it would have a time/date stamp from the master. The master could even archive old posts.

{for those ibmers out there -- :-) }

del cecchi


the original listserv ... started out as TOOLS in the late '70s (about the same time as usenet) with master/slave dataserver support and then expanded into mailing lists & conferencing. mailing lists were either a post at a time or accumulated. distribution from the master to the slaves was essentially the same mechanism as distribution to individuals.

probably the highest-traffic hierarchy on the internal network in the early to mid '80s was IBMPC (as an aside, the internal network was larger than all of the internet/arpanet until around '85).

a flavor of it was shared with bitnet members ... somewhat leading to listserv on the general internet today (and some forums currently gatewayed to usenet under bit. hierarchy).

random refs:
https://www.garlic.com/~lynn/94.html#33b
https://www.garlic.com/~lynn/99.html#24

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

No more innovation? Get serious

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: No more innovation?  Get serious
Newsgroups: alt.folklore.computers
Date: Mon, 25 Dec 2000 16:57:01 GMT
benc@krustbustr.hawaga.org.uk (Ben Clifford) writes:
The "seamless integration of multiple servers" on the WWW is at least half of what made it great - running on a single system is just a hypertext system.

ran across an old reference regarding a presentation at an advanced technology conference that I put on in March of 1982.

random ref
https://www.garlic.com/~lynn/94.html#22

Cary was flying up to Provo from San Jose about once a week; the DataHub project had a contract with a small group in Provo to write some of the DataHub code. Eventually, GPD canceled the project and let the group in Provo have all the rights. I believe this was the genesis of a (PC) network company that is still around in the Provo area.
----------------------------------
An Overview of the DataHub Project

by: Cary WR Campbell, GPD Advanced Systems Technology

ABSTRACT

DataHub is a prototype repository which provides highly-reliable storage of shared and private data objects for project-oriented workstations connected by a local area network (LAN).

This presentation discusses an emerging project-oriented computing environment and outlines the DataHub Project objectives and plans.

Among the key ad tech areas investigated are:

tying DASD and LANs together
sharing of programs and data among non-cooperating users
non-stop operation
high-level design language
multi-microprocessor (off-the-shelf) design and packaging
controlled user experiments, instrumented for productivity measurements


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Egghead cracked, MS IIS again

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Egghead cracked, MS IIS again
Newsgroups: comp.security.unix,comp.os.ms-windows.nt.admin.security,comp.security.misc
Date: Wed, 27 Dec 2000 16:50:08 GMT
Michael Erskine writes:
The problem is that we (as engineers and computer scientists) fail to look at the global scope of the problem. It is our nature, we want to fix the specific issue we have been assigned to consider. What we fail to do is grasp the idea that there are others, whose purpose is to discover every flaw, every weakness, and to exploit that not today, but at the time and place where it can do the MOST harm and yeild the MOST advantage. Because of where we are today, and where we will be a year from today, I would recommend those with a passing interest in security (and who enjoy a real tough challenge) dive into the issue with their whole intent. Learn everything you can and jump on that wagon with both feet. You will be needed because...

note that X9.59 has passed and is on its way to ANSI for the 60-day public comment period. The X9A10 work group was given the task of preserving the integrity of the financial infrastructure for all electronic retail payment (account-based) transactions. It has basically done this by defining a standard for 1) authenticated transactions (using digital signatures) and 2) flagging "authenticated" account numbers as not valid in non-authenticated transactions.

The net result with x9.59 is that it is not possible to copy down an account number and use it in another (non-authenticated) transaction.
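
a toy sketch of that rule (field names invented ... the standard defines the real encodings): an account flagged as authenticated-only is declined on any transaction arriving without a verified digital signature:

#include <stdbool.h>

struct account { bool x959_authenticated_only; /* ... */ };
struct transaction { bool signature_verified; /* ... */ };

/* the rule that makes a harvested x9.59 account number useless:
   merely knowing the number isn't enough to execute a transaction */
bool authorize(const struct account *acct, const struct transaction *txn)
{
    if (acct->x959_authenticated_only && !txn->signature_verified)
        return false;  /* decline non-authenticated use of the number */
    /* ... normal authorization (balance, limits, etc.) continues ... */
    return true;
}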

disclaimer: I participated in the definition of x9.59 as a member of the x9a10 work group.

random refs:
https://www.garlic.com/~lynn/aepay2.htm#privrules
https://www.garlic.com/~lynn/2000g.html#5
https://www.garlic.com/~lynn/aadsm3.htm#cstech8

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
Newsgroups: comp.arch
Date: Wed, 27 Dec 2000 21:08:42 GMT
ttonino writes:
Seagate had a Barracuda model with a special head assembly that would read (and write, I assume) two tracks at the same time. But modern disk platters do not have embedded servo information for nothing.

in the mid-60s, IBM had the 2303 fixed-head drive (a "drum" with one head per track) with about 4mbytes capacity. There was a version of the 2303, called the 2301, which read/wrote 4 heads in parallel (system transfer rate about 1.2mbytes/sec).

In '70, the follow-on device was the 2305 fixed-head drive, which did 3mbyte/sec transfer and had about 12mbytes capacity. A special high-performance version of the 2305 had a dual set of fixed heads offset 180 degrees. It read/wrote from either set of heads, cutting avg. rotational delay in half (from half a revolution to a quarter) ... but didn't transfer from both sets simultaneously, since a single set was already operating at the max. system transfer rate.

I believe that by '80 disks had started having servo feedback on a per-platter basis. It was becoming easier to have multiple heads on the same arm doing transfers over multiple parallel tracks on the same platter than to try simultaneous transfers from different platters using different heads on the same servo mechanism.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
Newsgroups: comp.arch
Date: Thu, 28 Dec 2000 17:03:43 GMT
Paul Repacholi writes:
dsiebert@excisethis.khamsin.net (Douglas Siebert) writes:

Does any major OS, other than Windows (yeah, I know, that one's pretty major :) ) care that much about 512 byte disk blocks? Does Win2K even

VMS. If it's not 512B/block , it's not a disk. Fudged by some old drives where the driver reads 2 or 4 blocks at a time.


note that there have been some vendor systems with implicit dependencies on 512-byte records ... for instance, in the event of a power failure, what is the granularity of the operation to the disk? some filesystem implementations are based on the assumption that, in case of a power failure, a write will either complete correctly or fail in a way the system can detect. A filesystem with metadata block size the same as the disk record size might not translate directly to a larger block size implemented with clusters of disk records.

in the past there have been some vendor system configurations where a power failure might mean there was sufficient power in the disk drive to correctly write a full record with correct ECC, but not necessarily enuf power to transmit the full record from processor memory ... i.e. the disk drive would supply zeros for the missing part of the record and then write a correct ECC. The problem was that the system was expecting the disk drive to indicate an error in the case of an incorrect or incomplete write. This particular failure mode resulted in parts of the filesystem metadata being inconsistent with no disk error indication. Similar inconsistency might occur when filesystem integrity depends on a disk I/O error indication and the metadata block unit changes from the disk record size to a multiple of the disk record size.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
Newsgroups: comp.arch
Date: Thu, 28 Dec 2000 18:34:36 GMT
Andi Kleen writes:
This failure mode has been plaguing modern Linux too, overwriting the inode table on disks with zeroes or even garbage (disk not intelligent enough to supply zeros) when the power was cut in the wrong moment. Even inodes that were not touched in months were destroyed this way, because they just happened to reside on the wrong block. Cheaper IDE systems seem to just write garbage.

the problem dates from at least the '60s ... the first time I worked on filesystem tolerance for it was the late '70s.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
Newsgroups: comp.arch
Date: Thu, 28 Dec 2000 21:10:01 GMT
Anne & Lynn Wheeler writes:
In '70, the follow-on device was a 2305 fixed-head drive that did 3mbyte/sec transfer and had about 12mbyte capacity. A special high-performance version of the 2305 had dual-set of fixed-heads off-set 180 degrees. It read/wrote from either set of heads (cutting avg. rotational delay in half) ... but didn't do system transfer simultaneously since it was already operating at max. system transfer.

the other characteristic introduced by the 2305 was the possibility of executing requests out of order.

the nominal ibm mainframe i/o process was to have a single i/o request sequence per device. nominal operation involved the device signaling the processor when a request had finished, the processor taking an I/O interrupt, performing some processing on the finished request, and then redriving the device with the next request in the queue. This processor device-redrive latency could result in the device sitting idle, missing rotational position, and reduced device I/O thruput.

the 2305 fixed-head drive introduced multiple concurrent request queues (in mainframe processor I/O architecture, the 2305 looked like 8 independent devices, each with its own request queue). the processor could schedule eight independent requests concurrently, and the 2305 could optimize the order of request execution. Furthermore, processor device-redrive latency was masked, since there could be seven queued requests still active at the point when any one request completed.
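
a sketch of why the eight exposures mask redrive latency (my own framing, not channel-program code): with up to eight requests outstanding, a completion leaves the device with queued work it can reorder, instead of sitting idle waiting for the processor to redrive it:

#define EXPOSURES 8   /* the 2305 appeared as 8 independent devices */

static int inflight;  /* requests currently active at the device */
static int queued;    /* requests waiting in the processor's queue */

/* start as many requests as there are free exposures ... the device
   is free to execute them out of order to minimize rotational delay */
static void redrive(void)
{
    while (inflight < EXPOSURES && queued > 0) {
        queued--;
        inflight++;   /* i.e. start i/o on a free exposure */
    }
}

void submit_request(void)
{
    queued++;
    redrive();
}

/* on the i/o interrupt for one exposure, up to seven requests are
   still active at the device ... the redrive latency is overlapped
   with useful work instead of costing a missed revolution */
void io_complete(void)
{
    inflight--;
    redrive();
}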

later in the '70s, the 3350 moveable-head disk was introduced ... about 640mbytes capacity. It came with a fixed-head option ... extra heads covering a portion of the platter area. The problem with the 3350 fixed-head area was that, while it wasn't necessary to actually move the arm to read/write that data, the 3350 only supported the standard single device request queue ... i.e. if the device was already involved in an i/o operation moving the disk arm, it was not possible to concurrently transfer data from the fixed-head area.

I tried to sponsor a business case where the 3350 fixed-head feature would be enhanced to support multiple request queueing similar to the 2305 (primarily based on enhanced system paging performance). Unfortunately, the business case got squelched by a different product group that felt it was developing a dedicated device solely for enhanced system paging performance (as opposed to an incremental 3350 feature allowing the 3350 to be used for both standard system data and enhanced system paging).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

A new "Remember when?" period happening right now

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A new "Remember when?" period happening right now
Newsgroups: alt.folklore.computers
Date: Thu, 28 Dec 2000 23:41:42 GMT
rmonagha@smu.edu (Robert Monaghan) writes:
my first college programming course (no credit) was in APL, at Yale's Watson center in 1969-70, from some of the language developers and proponents. Rather downhill from there, with fortran courses touting various routines, with 400+ lines needed to replace one line of APL; I was rarely impressed by the claimed "power" of these alternatives ;-) ;-) but the special typewriter/display and interpreter costs killed APL; I still have my IBM 5100 APL "portable" computer with docs and tapes ;-)

random old apl postings/refs:
https://www.garlic.com/~lynn/2000g.html#27
https://www.garlic.com/~lynn/2000g.html#24
https://www.garlic.com/~lynn/2000g.html#30
https://www.garlic.com/~lynn/2000.html#69
https://www.garlic.com/~lynn/2000.html#70
https://www.garlic.com/~lynn/2000d.html#15
https://www.garlic.com/~lynn/2000f.html#6
https://www.garlic.com/~lynn/2000f.html#26
https://www.garlic.com/~lynn/2000f.html#57
https://www.garlic.com/~lynn/2000c.html#49
https://www.garlic.com/~lynn/99.html#20
https://www.garlic.com/~lynn/99.html#38
https://www.garlic.com/~lynn/99.html#90
https://www.garlic.com/~lynn/99.html#149
https://www.garlic.com/~lynn/94.html#7

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
Newsgroups: comp.arch
Date: Thu, 28 Dec 2000 23:54:58 GMT
dsiebert@excisethis.khamsin.net (Douglas Siebert) writes:
How did you solve the problem? I would think that if you put a simple software checksum on the end of blocks containing inode and other fs structural information, you'd be fairly resistant to the problem. If the root cause is drives writing only the first part (or none) of the sector correctly but with a valid ECC code, you'd get an invalid checksum when you later tried to read the block. Then your filesystem (or more likely, your fsck or in-built structural recovery/repair process) would know to look to one of the many copies filesystems keep of this important information.

yes ... for this particular situation ... really critical metadata had pairs of locations which the system would write alternately, putting a version number & error check at the end of the record ... then on recovery (if it was possible to read both records w/o error), the valid record with the most recent version number was used.

the original design had the version number at the start of the record with alternating records ... which handled only the failure mode where a power failure during the write meant a valid ECC was not written (so reading during recovery resulted in an error) ... but didn't handle the power failure where zeros were propagated thru the end of the record and a correct ECC was written.
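
a minimal sketch of the scheme as described (the record layout and check function are invented for illustration): write critical metadata alternately to a pair of slots, with the version number and check value at the END of the record ... a zero-filled tail, even one finished off with a valid ECC, can't carry a valid current version, so recovery never selects a torn record:

#include <stdint.h>
#include <string.h>

#define REC_DATA 500

struct rec {
    uint8_t  data[REC_DATA];
    uint32_t version;  /* at the end, after the data */
    uint32_t check;    /* simple check value over data+version */
};

static uint32_t checksum(const struct rec *r)
{
    uint32_t s = 0x9e3779b9u ^ r->version;  /* seed: all-zeros fails */
    for (int i = 0; i < REC_DATA; i++)
        s = s * 31u + r->data[i];
    return s;
}

static struct rec slot[2];      /* the two on-disk locations */
static uint32_t cur_version;

void write_metadata(const uint8_t data[REC_DATA])
{
    struct rec r;
    memcpy(r.data, data, REC_DATA);
    r.version = ++cur_version;
    r.check = checksum(&r);
    slot[cur_version & 1] = r;  /* ping-pong between the two slots */
}

/* recovery: of the slots that read back intact, use the one with the
   most recent version number */
const struct rec *recover(void)
{
    const struct rec *best = 0;
    for (int i = 0; i < 2; i++)
        if (slot[i].check == checksum(&slot[i]) &&
            (!best || slot[i].version > best->version))
            best = &slot[i];
    return best;  /* null only if both copies are damaged */
}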

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Use of SET?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Use of SET?
Newsgroups: alt.security,comp.security.misc
Date: Fri, 29 Dec 2000 15:26:46 GMT
Neil J Cummins writes:
Hi

I'm currently carrying out some research into Security and eCommerce as a part of my Masters. I've been looking at Secure Electronic Transactions (SET), which appears to offer a good way forward for both consumers and vendors. Unfortunately as far as I can tell its take up has been very limited, unless you live in Scandinavia, which is surprising given its support by VISA and Mastercard.

Does anyone have any further info on the rollout of SET and when it is likely to be in widespread use, or is it being quietly forgotten in favour of SSL?

TIA

-- Neil J Cummins


news article from earlier this year
Visa Delicately Gives Hook to SET Standard
Wednesday, June 21, 2000
by Jennifer Kingson Bloom

Amid the hubbub of the Visa-MasterCard antitrust trial, the world paid little or no attention to the fact that Visa on Monday officially gave up trying to implement SET, the Secure Electronic Transaction specification for Internet payments, in the United States.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Use of SET?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Use of SET?
Newsgroups: alt.security,comp.security.misc
Date: Fri, 29 Dec 2000 16:01:03 GMT
Neil J Cummins writes:
Does anyone have any further info on the rollout of SET and when it is likely to be in widespread use, or is it being quietly forgotten in favour of SSL?

also, neither SSL nor SET addresses the problem of account numbers being essentially shared-secrets ... misc. refs:
https://www.garlic.com/~lynn/2000g.html#41
https://www.garlic.com/~lynn/aepay4.htm#3dssl

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Egghead cracked, MS IIS again

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Egghead cracked, MS IIS again
Newsgroups: comp.security.unix,comp.os.ms-windows.nt.admin.security,comp.security.misc
Date: Fri, 29 Dec 2000 21:50:52 GMT
Michael Erskine writes:
Establishing a secure eCommerce facility is vastly more difficult than merely setting up a secure web server, encrypting data, and going into business. We have endemic problems on the network itself. We need secure name resolution, we don't have it. We need secure OS's, we don't have them. We need users who have a clue, they are very rare. We need applications that are not shot full of potential buffer overflows, which simply have not been identified. We don't have them. We need good proactive heuristic Intrusion Detection Systems, we don't have them.

What has happened is simple. We have taken a technology (which is truly in it's infancy) and tried to build a commercial communications infrastructure around it. We have done that with hobbiests, Ma n' Pop ISP's, and a small percentage of real communications professionals. We have allowed marketing decisions to out weigh security and engineering decisions and have built an eCommerce system on TOP of a very reliable communications model, which has no intrinsic security. We (well the engineers) are now building patches into the system in a foot race against those who would entrench themselves so deeply in the existing system that they may never be rooted out.


I think I gave a talk about some of this at ISI a couple years ago to the IETF and electronic commerce groups and some number of graduate students from USC. Some of the examples were similar to problems I had worked on 25-30 years earlier.

There are a number of problems in mapping to an infrastructure that spent most of its years not worrying a lot about commercial hardening issues (in general, security can be treated as a subset of the generalized failure-mode problem).

Another failing was that some took message protocols that had an implicit design point of a circuit-based network and did a straight port to a packet network ... w/o even giving a thot to the SLAs (service level agreements) and diagnostic procedures that went with the circuit-based networks (completely independent of the issues of closed circuit-based vs. "open" packet-based). For instance, try getting an SLA for 4-nines to 5-nines end-to-end availability from your ISP to some random other entity using some other ISP.

random refs:
https://www.garlic.com/~lynn/aadsmore.htm#dctriv
https://www.garlic.com/~lynn/99.html#49
https://www.garlic.com/~lynn/99.html#48
https://www.garlic.com/~lynn/internet.htm
https://www.garlic.com/~lynn/99.html#163
https://www.garlic.com/~lynn/99.html#219
https://www.garlic.com/~lynn/99.html#224
https://www.garlic.com/~lynn/rfcietff.htm
https://www.garlic.com/~lynn/aadsmore.htm#client3
https://www.garlic.com/~lynn/aadsmore.htm#setjava

in general, the ARPANET/Internet has been around just about as long as the internal network. While the internal network was larger than the whole arpanet/internet until sometime in '85 ... there was also a lot more attention given to commercial hardening issues and detailed failure-mode analysis related to the internal network's operation.

while not directly network related ... another contrast of commercial hardening vis-a-vis its absence (this time with respect to disk drives and filesystems). The following is related to what happens if there happens to be a power failure at just the moment a critical filesystem write occurs.
https://www.garlic.com/~lynn/2000g.html#43
https://www.garlic.com/~lynn/2000g.html#44
https://www.garlic.com/~lynn/2000g.html#47

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

> 512 byte disk blocks (was: 4M pages are a bad idea)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: > 512 byte disk blocks (was: 4M pages are a bad idea)
Newsgroups: comp.arch
Date: Fri, 29 Dec 2000 22:14:20 GMT
Brian Inglis writes:
IIRC MVS would not work with any Fixed Block Architecture (FBA-512B) drive (3310?, 3370) but required Count-Key-Data (CKD) drives to perform VTOC lookups during IPL.

CKD was a 1960s trade-off for an environment with very limited memory capacity and relatively huge I/O capacity. It allowed records to be tagged, and I/O requests could search the tags on the device ... and then read the specific record found with a specific tag.

OS's made extensive use of the feature for the VTOC (basically the file directory) and PDS (partitioned data sets ... basically something like a special one-level-deep directory/library; for instance, much of the system was placed in the sys1.linklib PDS ... and entries could be found by doing a CKD search of the entries in the directory).

one I/O operation could find the appropriate information w/o requiring any filesystem information cached in memory.

The problem was that by at least the mid '70s the trade-off had reversed: memory was becoming abundant and I/O capacity was being strained. By that time, overall system efficiency was improved by caching filesystem information in memory and not wasting I/O capacity doing (linear) searches of tags on disk.
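
a sketch of the two sides of the trade-off (toy structures, not the actual VTOC/PDS formats): the 1960s approach ties up the device/channel path for the whole linear key search to save memory; the later approach spends a little memory on a cached directory and only one targeted read of i/o capacity:

#include <string.h>

struct entry { char key[8]; unsigned block; };

/* crude cost model: count how long the device/channel path is busy */
static unsigned long device_busy;  /* in record-scan times */

/* 1960s CKD style: the search runs at the DEVICE ... channel, control
   unit and device stay busy for the whole rotational scan, but no
   memory is spent holding the directory */
unsigned ckd_search(const struct entry *dir_on_disk, int n, const char *key)
{
    for (int i = 0; i < n; i++) {
        device_busy++;                       /* scanning on the device */
        if (memcmp(dir_on_disk[i].key, key, 8) == 0)
            return dir_on_disk[i].block;
    }
    return 0;
}

/* mid-70s-and-later style: directory cached in now-abundant memory,
   so the scarce i/o capacity is spent only on one targeted read */
unsigned cached_lookup(const struct entry *cached_dir, int n, const char *key)
{
    for (int i = 0; i < n; i++)              /* burns cpu, not i/o */
        if (memcmp(cached_dir[i].key, key, 8) == 0) {
            device_busy++;                   /* single block read */
            return cached_dir[i].block;
        }
    return 0;
}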

While it was possible to show significantly increased system thruput by not using any of the count-key-data search features, their use was ingrained in many places in the system. Around '82 or '83, I got a price tag of $26m for the MVS change to not use multi-record & multi-track search for VTOC and PDS (i.e. regardless of whether the system was using 3380 ckd devices or 3370 fba devices).

CMS, VM/370, CP/67, etc supported CKD disks from the '60s ... but never relied on the multi-record & multi-track search features of CKD ... and so were able to also support FBA disks with relative ease.

However, it was not possible to get MVS to stop using multi-record and multi-track search operations ... even when it was possible to demonstrate that MVS modified to run with CKD searchs had higher system thruput (even using the exact same devices).

random refs:
https://www.garlic.com/~lynn/93.html#29
https://www.garlic.com/~lynn/94.html#35
https://www.garlic.com/~lynn/97.html#16
https://www.garlic.com/~lynn/97.html#29
https://www.garlic.com/~lynn/99.html#75
https://www.garlic.com/~lynn/2000f.html#18
https://www.garlic.com/~lynn/2000f.html#19
https://www.garlic.com/~lynn/2000f.html#42

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

> 512 byte disk blocks (was: 4M pages are a bad idea)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: > 512 byte disk blocks (was: 4M pages are a bad idea)
Newsgroups: comp.arch
Date: Fri, 29 Dec 2000 22:20:29 GMT
... oops typo; with -> w/o

However, it was not possible to get MVS to stop using multi-record and multi-track search operations ... even when it was possible to demonstrate that MVS modified to run w/o CKD searchs had higher

oops typo, i.e. MVS modified to cache and not do CKD search had higher system thruput.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Egghead cracked, MS IIS again

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Egghead cracked, MS IIS again
Newsgroups: comp.security.unix,comp.os.ms-windows.nt.admin.security,comp.security.misc
Date: Sun, 31 Dec 2000 19:30:27 GMT
safado writes:
Yes, they blamed an email message. I remember because only Outlook has the security holes that would allow it to happen. After the Internet Worm hit email users about fifteen years ago, the functions that allowed email messages to infect systems were removed from all major email programs. Even after knowing this, though Microsoft had the nerve to put them back in.

there was also an email incident on the internal network in the early '70s ... which led to a number of security features being instituted to prevent similar activities from occurring again.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

