From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: TSS ancient history, was X86 ultimate CISC? designs) Newsgroups: alt.folklore.computers Date: Sat, 25 Nov 2000 01:20:01 GMT

... i was going thru some old boxes related to early CP/67.
Lincoln Labs was the first place that cambridge installed CP/67 sometime in '67 (the university that I was at was the 3rd cp/67 installation, the last week in jan. 1968).
In any case, ran across Lincoln Laboratory Multi-Programming
Supervisor 360D-5.1-004 ... SHARE contributed program library.
Date of Submittal, May 26, 1966 by Joel Winett and Frank Belvin.
Also LLMPS was the initial core infrastructure used by Michigan for the Michigan Terminal System (MTS).
In addition to CP/67 (a TSS/360 alternative developed for the 360/67 by the cambridge science center), another system developed for the 360/67 was MTS, done by Michigan.
random refs:
https://www.garlic.com/~lynn/2000.html#89
From the LLMPS manual
E. Preparing a Supervisor System
The Lincoln Laboratory Multi-Programming Supervisor is formed from the
main supervisor program SUPER, the table subprograms TABLES and BUFFS,
the supervisor subprograms FNDJTL, FIDCQ, and FBJTJL, and the problem
state subprograms EXIT and JBRP. The operation of the supervisor is
controlled through the use of a set of tables, the Job List, the Job
Table, the Device List, and the Device Table. The formats of these
tables are defined using equalities and all references to the fields
in a table are made symbolically.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Cryptogram Newsletter is off the wall? Newsgroups: sci.crypt Date: Sat, 25 Nov 2000 15:12:02 GMT

vjs@calcite.rhyolite.com (Vernon Schryver) writes:
W/o an opaque identifier that TCP pushed down and that was returned in ICMP responses (so that when IP pushed the ICMP up, the upper layers could filter out the ICMPs not for them) ... somebody has to filter the ICMPs that might be pushed up to one or more instances.
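a rough sketch of the filtering (python, purely illustrative; the function and the connection-table shape are my own, not from any particular stack). the ICMP error payload quotes the original IP header plus at least the first 8 bytes of the datagram, which is enough to recover the TCP port pair and match against a connection table:

    import struct

    def match_icmp_to_connection(icmp_payload, connections):
        # icmp_payload: bytes following the ICMP header, i.e. the quoted
        # original IP header + first 8 bytes of the original datagram
        ihl   = (icmp_payload[0] & 0x0f) * 4    # original IP header length
        proto = icmp_payload[9]                 # original protocol field
        src   = icmp_payload[12:16]             # original source address
        dst   = icmp_payload[16:20]             # original destination address
        if proto != 6 or len(icmp_payload) < ihl + 8:
            return None                         # not TCP, or quote too short
        # first 4 bytes of the original TCP header hold the port pair
        sport, dport = struct.unpack("!HH", icmp_payload[ihl:ihl + 4])
        # the quoted headers are from the *outbound* packet, so the key is
        # (local addr, local port, remote addr, remote port)
        return connections.get((src, sport, dst, dport))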
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: TSS ancient history, was X86 ultimate CISC? designs) Newsgroups: alt.folklore.computers Date: Sat, 25 Nov 2000 15:32:58 GMT

random other urls on 360/67 from search engines
& of course Melinda's page
https://www.leeandmelindavarian.com/Melinda#VMHist
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: virtualizable 360, was TSS ancient history Newsgroups: alt.folklore.computers Date: Sat, 25 Nov 2000 15:45:37 GMT

nospam@nowhere.com (Steve Myers) writes:
There were actually two distinct problems with running virtualized virtual-memory systems. One was the "shadowing" of the virtual-memory hardware components (i.e. a software analog of the virtual-memory lookaside buffer giving virtual->real translation). The virtual->real tables in the virtual machine had to be emulated in the VM supervisor: because the virtual machine's tables were really translating from one virtual space to another virtual space, the real CP had to shadow the tables to provide the real addresses.
The other was that a virtual machine page replacement algorithm approximating LRU ... running under a real machine page replacement algorithm approximating LRU ... could get into real pathological situations. Effectively the CP page replacement algorithm was doing LRU replacement under the assumption that page accesses in virtual memory were characteristic of "least recently used". However, a virtual machine running its own page replacement algorithm would start to look more like "most recently used" ... i.e. the least recently used page was the page that was most likely to be used next. LRU algorithms running under LRU algorithms violate the lower level's LRU assumptions.
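a small simulation of the pathology (python, purely illustrative). a guest cycling through one more page than the resident set is the classic case where the least recently used page is exactly the next one needed ... so strict LRU at the lower level faults on every reference, while MRU-biased replacement on the same trace does not:

    from collections import OrderedDict

    def faults(trace, frames, policy):
        resident = OrderedDict()              # insertion order ~ recency
        misses = 0
        for page in trace:
            if page in resident:
                resident.move_to_end(page)    # touch: make most recent
                continue
            misses += 1
            if len(resident) >= frames:
                # last=False evicts least recently used,
                # last=True evicts most recently used
                resident.popitem(last=(policy == "mru"))
            resident[page] = True
        return misses

    # guest cycles thru 5 pages; the level below holds only 4 frames
    trace = list(range(5)) * 100
    print("lru:", faults(trace, 4, "lru"))    # 500: every reference faults
    print("mru:", faults(trace, 4, "mru"))    # far fewer on the same trace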
The VS1 did a little more than just operate in real mode. It laid out its virtual memory on a one-for-one basis in a virtual machine ... but it also supported pseudo page fault & pseudo page complete interrupts being reflected from the CP supervisor (i.e. CP could notify the VS1 supervisor when applications running under VS1 had page faults ... allowing VS1 to perform a task switch).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: virtualizable 360, was TSS ancient history Newsgroups: alt.folklore.computers,comp.arch Date: Mon, 27 Nov 2000 14:46:44 GMT

nouce@multics.ruserved.com (Richard Shetron) writes:
on the CSC machine there were frequently several MIT students. One day the CSC system crashed three times before the cause was identified and a fix generated. the failures were traced to a particular student, who was called and asked to stop performing the particular operation until the fix was installed. the student continued to perform the operation, crashing the system a couple more times, before their account was permanently removed. the student later complained to their faculty adviser that their access shouldn't have been permanently removed regardless of what they did.
the traditional approach in situations like that ... was that the student was given the source and asked to fix the problem that they had uncovered ... but there were a few cases where the only interest was in seeing how many times the system could be crashed.
27 crashes in single day:
http://www.lilli.com/360-67 (corrected) https://www.multicians.org/thvv/360.67.html
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: e-commerce: Storing Credit Card numbers safely Newsgroups: comp.security.misc,alt.security,alt.computer.security Date: Mon, 27 Nov 2000 22:04:21 GMT

"Joseph Ashwood" writes:
to a large extent, x9.59 (and similar protocols) is privacy neutral (i.e. not divulging privacy information) and eliminates the account number as a point of attack ... since the account numbers are defined as being usable only in authenticated transactions.
effectively, account numbers usable in non-authenticated transactions become shared-secrets ... so knowledge of an account number's numerical value needs to be restricted.
random refs:
https://www.garlic.com/~lynn/2000f.html#72
https://www.garlic.com/~lynn/aadsm2.htm#straw
https://www.garlic.com/~lynn/aadsm3.htm#cstech13
https://www.garlic.com/~lynn/
X9 is the ansi financial industry standards body
http://www.x9.org/
TC68 is the ISO international financial standards body (of which X9
is a member)
http://www.tc68.org/
ABA (american bankers association) serves as the secretariat of both
the X9 (US) and TC68 (ISO/international) financial standards bodies.
http://www.aba.com/
disclaimer ... I was a member of the X9A10 working group and helped write the X9.59 standard. The charter given the X9A10 working group was to preserve the integrity of the financial infrastructure for all electronic retail payment (account-based) transactions (not just credit cards and/or even all payment cards).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: virtualizable 360, was TSS ancient history Newsgroups: alt.folklore.computers,comp.arch Date: Tue, 28 Nov 2000 17:28:54 GMT

how 'bout virtualizing a 360 on an intel platform ... blurs the line even further as to what is a mainframe
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 360/370 instruction cycle time Newsgroups: bit.listserv.ibm-main Date: Tue, 05 Dec 2000 05:53:01 GMT

eells@US.IBM.COM (John Eells) writes:
the 370/165 -> 370/168 ... added faster memory, from ???? to around 400 nanoseconds ... and eventually the 370/168-3 had a 64kbyte cache (presumably fewer cache misses as well as lower elapsed time when there was a cache miss).
various kinds of things done to the hardware & microcode dropped the avg. cycle time per instruction from around 2.1 avg (80ns) cycles per 370 instruction on the 165 to about 1.6 avg (80ns) cycles per 370 instruction on the 168 (faster memory, bigger cache, and improved hardware/microcode reducing avg. cycles per 370 instruction all contributed to the 168 being faster than the 165).
the lower end machines with vertical microcode ... averaging 10 native instructions per 370 instruction ... needed a 5mip (native) machine to yield a 1/2 mip 370 .. the 370/125 needed something like 1mip native to yield 1/10th mip in 370 instructions.
the type of instructions in the operating system supervisor typically dropped into the microcode on these machines byte-for-byte (i.e. 6000 bytes of 370 code was about 6000 bytes of 370/148 microcode). The basic ecps package then saw a 10:1 speedup by having critical pieces of the operating system implemented in native machine microcode (other features saw more because the two domains were completely different and didn't have to save/restore registers across the boundary).
random ref:
https://www.garlic.com/~lynn/94.html#21
--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 360/370 instruction cycle time Newsgroups: bit.listserv.ibm-main Date: Wed, 06 Dec 2000 15:26:43 GMT

Rick.Fochtman@BOTCC.COM (Rick Fochtman) writes:
--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 360/370 instruction cycle time Newsgroups: bit.listserv.ibm-main Date: Wed, 06 Dec 2000 17:19:21 GMT

jbroido@PERSHING.COM (Jeffrey Broido) writes:
the 65/67 fetched instructions in double words at 750ns per double word. RR time (2-byte instruction) would have included 1/4th of the 750ns instruction fetch time (187ns) plus the actual instruction decode and execution ... for about 600ns total. an rs instruction (at 4 bytes) would have included 1/2 of the 750ns instruction fetch (375ns) plus an operand storage fetch/store (750ns) plus instruction decode and execution (i.e. 1125ns plus instruction decode and execution). There was also something like 100-150ns more if the rs instruction used both base & index addressing (as opposed to just base reg. addressing).
the 67, when operating in virtual memory mode, increased the effective memory access time by 150ns, to 900ns, allowing for the associative array translation from virtual->real. rr instructions then became 1/4th of the 900ns for instruction fetch (225ns) plus instruction decode and execution. an rs instruction became 1/2 of 900ns for instruction fetch plus 900ns for the operand fetch/store plus instruction decode and execution (450ns+900ns ... 1350ns plus instruction decode and execution).
the associative array was an 8-entry ... fully associative virtual->real lookup. A miss in the associative array ... of course, would have resulted in an access to cr0 for the segment table, indexing the segment table, getting the page table pointer, indexing the page table, retrieving the page's virtual->real translation and putting it into the associative array ... and then completing the address translation.
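a sketch of the table walk on a miss (python, illustrative only; the 64kbyte-segment/4k-page split follows the 370-style layout rather than the exact 360/67 formats, and the table shapes are my own):

    class PageFault(Exception):
        pass

    PAGE_SHIFT = 12        # 4k pages
    SEG_SHIFT  = 16        # 64k segments -> 16 pages per segment

    def translate(vaddr, segment_table):
        seg  = vaddr >> SEG_SHIFT            # high bits index the segment table
        page = (vaddr >> PAGE_SHIFT) & 0xf   # middle bits index the page table
        off  = vaddr & 0xfff                 # low 12 bits pass thru unchanged
        page_table = segment_table[seg]      # segment entry -> page table
        entry = page_table[page]             # page table entry
        if entry is None:                    # invalid: translation exception
            raise PageFault(hex(vaddr))
        return (entry << PAGE_SHIFT) | off   # real page number + byte offset

    # one 64k segment whose 16 pages map to real frames 8..23:
    segtab = {0: {p: 8 + p for p in range(16)}}
    print(hex(translate(0x1234, segtab)))    # 0x9234: virtual page 1 -> frame 9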
--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 360/370 instruction cycle time Newsgroups: bit.listserv.ibm-main Date: Wed, 06 Dec 2000 17:56:53 GMT

Tony Harminc writes:
it was somewhat tailored to MVS since one of the address bits used to index the tlb was the 8mbyte address bit. VM virtual address spaces were typically less than 8mbytes and so only half the tlb entries tended to be available. MVS laid out the kernel in virtual memory with the supervisor <8mbytes & applications >8mbytes ... so it tended to have half the tlb entries for the supervisor and half the tlb entries for applications.
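a sketch of the indexing issue (python, illustrative; the 7-bit index width is an assumption for illustration, not the actual 168 TLB geometry):

    TLB_INDEX_BITS = 7        # assumed width, purely for illustration

    def tlb_index(vaddr):
        # low index bits come from just above the 4k page offset ...
        low = (vaddr >> 12) & ((1 << (TLB_INDEX_BITS - 1)) - 1)
        # ... and the 8mbyte bit (2**23) supplies the top index bit
        eight_mb = (vaddr >> 23) & 1
        return (eight_mb << (TLB_INDEX_BITS - 1)) | low

    # an address space living entirely below 8mbytes never sets the top
    # bit, so it only ever reaches half of the 2**7 = 128 slots:
    print(len({tlb_index(a) for a in range(0, 1 << 23, 1 << 12)}))    # 64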
--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 360/370 instruction cycle time Newsgroups: bit.listserv.ibm-main Date: Wed, 06 Dec 2000 18:18:51 GMT

Anne & Lynn Wheeler writes:
65(& 67) had outboard channels ... so had more cpu available for program execution.
the 67 multiprocessor was unique. It had tri-ported memory and an independent channel controller ... allowing simultaneous access by both cpus and the channel controller. the tri-ported memory slowed memory access down by about another 15% (i.e. on a half-duplex 67 ... operating in single processor 65 mode running MFT ... memory access took 15% longer than on a real 65).
the interesting thing was that a half-duplex 67 had significantly higher workload thruput than a simplex 65/67 ... for workload that was both 100% CPU and concurrent heavy I/O (typical of many cp/67 installations). The difference was that the 15% memory bus slowdown on each memory access (for the tri-ported memory) was more than offset by the elimination of the memory bus cycle stealing that I/O activity caused in single-ported memory.
The 115/125 were somewhat unique low-end processors (and boeblingen is reputed to have gotten their hands slapped for the design). The 115/125 supported a 9-port common memory bus that typically had 3-5 (up to 9) microprocessors all sharing the same memory bus in a generalized multiprocessor architecture. The 115 had the same microprocessor installed at each memory bus position ... the difference was that the different microprocessors had different program loads ... i.e. one of the microprocessors had the 370 instruction simulation program load ... the other microprocessors had control unit and/or other function/feature program loads. The 125 differed from the 115 only in that the microprocessor with the 370 instruction simulation program load was about 25% faster than the other microprocessors (giving the 125 about 25% faster 370 execution than the 115).
The follow-ons to the 158/168 were the 3031, 3032, & 3033. The big difference was that the 3031, 3032, & 3033 had outboard channel directors (last seen in the 360/67). The 303x channel directors were essentially the 158 horizontal microcode engine w/o the 370 instruction simulation (just the channel I/O support).
The 3031 was essentially a 158 repackaged for the outboard channel director (i.e. effectively a two-processor 158 ... one dedicated to 370 instruction execution and one dedicated to channel i/o) ... with other misc. enhancements the 3031 was about 40% faster than a 158.
The 3032 was essentially a 168 repackaged with outboard channel director.
The 3033 started out as a 168/3032 wiring diagram mapped to newer chip technology that was about 20% faster than the 168 chip technology. The new chips also had about ten times as many circuits per chip as the 168 chip technology. The straightforward remap started out with the 3033 being approx. 20% faster than the 3032. Various optimizations then redesigned specific portions of the 3033 to take advantage of intra-chip operations rather than just straightforward inter-chip operations (compared to the straight 168/3032 remap).
--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.) Newsgroups: alt.folklore.computers Date: Thu, 07 Dec 2000 15:54:04 GMT

Eric Fischer writes:
the original CP would up the user program's dispatching quanta anytime the user program did any sort of terminal i/o. as a result, various people ... who felt that a blip of cpu every two seconds wasn't sufficient ... would throw in terminal i/o possibly every 100ms to try and increase their thruput.
it was one of the reasons that i had to do dynamic feedback fair-share scheduling. the change was that the time-slice quantum for "interactive" was much smaller ... and the dispatching interval was proportionally smaller also (in theory two tasks ... one running consistently with the small interactive time-slice and another running consistently with the background time-slice ... would actually accumulate CPU resources at the same rate ... all other things being equal). that minimized the motivation to throw in the gratuitous terminal i/o.
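a sketch of the idea (python, purely illustrative; the deadline arithmetic is my own simplification, not the actual vm/370 scheduler). dispatch earliest-deadline-first, with each task's next deadline receding by quantum/share ... a small interactive quantum earns only a proportionally small advance, so slice size cancels out of the long-term cpu accumulation:

    import heapq

    def run(tasks, total_time):
        # tasks: dicts with "name", "quantum", "share" (fraction of cpu)
        clock = 0.0
        used  = {t["name"]: 0.0 for t in tasks}
        ready = [(0.0, t["name"], t) for t in tasks]
        heapq.heapify(ready)
        while clock < total_time:
            deadline, name, t = heapq.heappop(ready)   # earliest deadline
            clock += t["quantum"]                      # consume the slice
            used[name] += t["quantum"]
            # next deadline recedes by quantum/share
            heapq.heappush(ready,
                           (deadline + t["quantum"] / t["share"], name, t))
        return used

    print(run([{"name": "interactive", "quantum": 1,  "share": 0.5},
               {"name": "background",  "quantum": 10, "share": 0.5}], 1000))
    # both accumulate ~500 units of cpu despite 10x different slice sizes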
--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM's mess (was: Re: What the hell is an MSX?) Newsgroups: alt.folklore.computers Date: Sat, 09 Dec 2000 16:32:10 GMT

Larry Anderson writes:
i've claimed that a large part of the success was a combination of 1) the ibm name, 2) spreadsheet (& misc. other business) software (mostly non-ibm), & 3) mainframe terminal emulation
a single keyboard/display on the desktop could provide both local computing and the necessary mainframe access for the rest of the business computing; a basic pc was only a little more expensive than an ibm mainframe terminal. a company might have 50k-100k (up to millions) of mainframe terminals ... i.e. the ibm mainframe terminal installed base at the time was much larger than that of personal computers ... upgrading each of those terminals created a huge install base for the people writing business (or other kinds of) software.
In this market, the justification for personal computing on the desk wasn't the total cost of the pc ... just the incremental price difference between an ibm-brand mainframe terminal ... and an ibm-brand pc with host terminal emulation. At some point, the software base got large enuf that some businesses could start to justify the complete PC price for local computing (not just the incremental difference) ... i.e. along the way various market scale thresholds started to kick in.
i had this argument with some of the mac guys before the mac shipped (at the time, my brother was the apple regional sales guy for a couple states, and when he was periodically in town we would get together for dinner with some of the mac people).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM's mess (was: Re: What the hell is an MSX?) Newsgroups: alt.folklore.computers Date: Sat, 09 Dec 2000 18:09:41 GMT

in the mid to late 70s ... mainframe terminals tended to be in terminal rooms and/or used by data entry, call center, and processing people ... there were a few scattered around on desks for professional programmers and engineers. authorizing a terminal on a person's desk was part of the annual budget planning and required VP-level sign-off. there were starting to be some exceptions, like the HONE system for all the branch and field people ... where it was becoming part of the standard product handling process that a terminal had to be used.
... misc refs:
https://www.garlic.com/~lynn/2000f.html#62
even the prospect of email, by itself wasn't enuf to break the barrier (i.e. the internal network was larger than the whole arpanet/internet up thru possibly sometime in 1985).
one friday night over lots of beer we were pondering how to break some of this log jam. we needed a killer app to go with email and some business numbers. We came up with the online corporate telephone book, with the requirement that it had to be implemented in less than 2 person-weeks and take less than 1/2 of one person's time to support and maintain for the whole company. We also did some calculations that showed that the 3-year depreciated cost of a terminal was less than the monthly cost of the business telephone ... which every employee got on their desk w/o question.
shortly after that (combination of email, online corporate telephone book, and the cost analysis), there was a 3-month period where the majority of the new terminal allocation out of the annual budget disappeared onto middle management and executive desks. Shortly after that, the requirement that terminal authorization for individual desks needed a VP signature was eliminated.
as PCs started to become available with terminal emulation ... it was still possible to show that the 3-year depreciated cost of a PC was less than the monthly business phone on people's desks. That it was a single keyboard & display ... and not a whole lot of keyboards & displays all having to fit on the same desk ... was an important issue.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 360/370 instruction cycle time Newsgroups: bit.listserv.ibm-main Date: Mon, 11 Dec 2000 16:26:29 GMT

smetz@NSF.GOV (Metz, Seymour) writes:
My understanding was that a major difference between 360/195 & 370/195 was in the RAS area, instruction retry, etc (and it never got any of the virtual memory stuff).
I never actually worked on the 195 ... I did spend some time with the engineers tho ... they were looking at building a dual i-stream version. On most codes, the 195 had a horrible problem with branches draining the pipeline. A little hardware ... an extra PSW, an extra set of registers, a one-bit i-stream tag for the pipeline, etc ... would allow supporting a dual i-stream (from a software standpoint, two-processor) machine ... which had a somewhat better chance of keeping the pipeline fed.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 360/370 instruction cycle time Newsgroups: bit.listserv.ibm-main Date: Mon, 11 Dec 2000 17:25:49 GMT

Anne & Lynn Wheeler writes:
The 370 reference summary on the front lists:
conditional swapping: cds, cs
cpu timer & clock comparator: sckc, spt, stckc, stpt
direct control: rdd, wrd
dynamic address translation: lra, ptlb, rrb, stnsm, stosm
input/output: clrio, siof
multiprocessing: sigp, spx, stap, stpx
psw key handling: ipk, spka
there were also the long instructions ... mvcl & clcl
& the byte instructions ... icm, clm, stcm
the trace & per stuff ... mc
floating point extended
many of the above used the B2 opcode and then the byte following the B2 as a sub-opcode (adding another 256 possible instructions).
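a decode sketch (python, illustrative; the handler tables are hypothetical and sparsely populated):

    HANDLERS    = {0xBA: "CS", 0xBB: "CDS"}        # ordinary one-byte opcodes
    B2_HANDLERS = {0x02: "STIDP", 0x05: "STCK"}    # B2xx sub-opcodes

    def decode(instr):
        op = instr[0]
        if op == 0xB2:                    # extended-opcode escape byte
            return B2_HANDLERS[instr[1]]  # next byte picks among 256 more
        return HANDLERS[op]               # (KeyError ~ operation exception)

    print(decode(bytes([0xB2, 0x05, 0x00, 0x00])))    # -> STCK
    print(decode(bytes([0xBA, 0x12, 0x30, 0x00])))    # -> CS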
multiprocessing, dynamic address translation, and conditional swapping weren't part of the base, original 370.
the CP/67 "H" system provided virtual 370s running on a 360/67. For the original 370 instructions this required intercepting the prg1s (op exceptions) and simulating them. Prior to that, most of cp/67 had been involved in intercepting prg2s (priv. excep).
cds & cs also weren't in the original 370. it was done at CSC as an outgrowth of the fine-granularity MP locking work. Ron Smith (one of the people that owned the POP in POK) told us that we wouldn't be able to get an MP-specific instruction added and that we needed to come up with a paradigm that allowed the instruction to be used in uniprocessor mode. That took a couple months and resulted in the programming notes on how to use cs/cds for managing stuff in non-priv. tasks. The original instruction definition was CAS (compare and swap) ... the initials of the person that did the original work were CAS, and it took a couple months to come up with a mnemonic that matched his initials. That was then changed to CS & CDS to cover both single word and double word.
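the uniprocessor-usable paradigm from the programming notes boils down to the now-familiar compare-and-swap retry loop. a sketch (python, with a toy atomic cell standing in for the CS instruction ... illustrative only):

    import threading

    class Cell:
        # toy stand-in for a storage word updated with CS
        def __init__(self, value=0):
            self.value = value
            self._lock = threading.Lock()    # simulates hardware atomicity

        def compare_and_swap(self, old, new):
            # atomically: if the cell still holds old, store new & succeed
            with self._lock:
                if self.value == old:
                    self.value = new
                    return True
                return False                 # somebody else got there first

    def add(cell, n):
        while True:                          # the classic CS retry loop
            old = cell.value                 # fetch the current value
            if cell.compare_and_swap(old, old + n):
                return                       # updated w/o holding any lock

    counter = Cell()
    threads = [threading.Thread(target=add, args=(counter, 1))
               for _ in range(50)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter.value)                     # 50, w/o any explicit locking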
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM's mess (was: Re: What the hell is an MSX?) Newsgroups: alt.folklore.computers Date: Tue, 12 Dec 2000 00:18:39 GMT

jcmorris@jmorris-pc.MITRE.ORG (Joe Morris) writes:
Later, when ibm pcs started appearing as home terminals, the PVM 3270 simulator support was upgraded so that there was a dictionary (indexing data already transmitted), compression & some other transmission optimization schemes with the terminal simulator on the pc (attempting to mask some of the slowness of the 2400 baud modems).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 360/370 instruction cycle time Newsgroups: bit.listserv.ibm-main Date: Tue, 12 Dec 2000 00:34:18 GMT

b19141@ACHILLES.CTD.ANL.GOV (Barry Finkel) writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Could CDR-coding be on the way back? Newsgroups: comp.lang.lisp,comp.arch Date: Tue, 12 Dec 2000 16:58:31 GMT

Jan Ingvoldstad writes:
already the majority of the bytes are not text ... but binary of one form or another. let's say that text is only 10% of the total ... and something is worked out to handle the binary in some other way. That would still mean that in 20 years every person on the planet would have to be generating around 750,000 text characters per day; at say 6 characters per word ... around 125,000 words/day ... at say 300 words/page ... around 400 pages/person/day.
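the back-of-envelope arithmetic, spelled out (python, just restating the numbers in the paragraph):

    chars_per_day = 750_000       # text characters per person per day
    words = chars_per_day / 6     # say 6 characters/word -> 125,000 words
    pages = words / 300           # say 300 words/page    -> ~417 pages
    print(words, pages)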
the majority of the usenet growth has been binary, with some increase due to more people involved. however, once every person in the world is spending all their time generating material for usenet ... i would think that the growth would start to level off.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 360/370 instruction cycle time Newsgroups: bit.listserv.ibm-main Date: Tue, 12 Dec 2000 17:06:20 GMT

Chris_Blaicher@BMC.COM (Blaicher, Chris) writes:
when virtual memory was announced ... 135s & 145s had DAT enabled by a new microcode load. The 155 & 165 had to have hardware installed in the field to allow them to support virtual memory. There were lots of 155s & 165s field upgraded.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: No more innovation? Get serious Newsgroups: alt.folklore.computers Date: Wed, 13 Dec 2000 20:24:19 GMT

benc@krustbustr.hawaga.org.uk (Ben Clifford) writes:
many of these features have tended to obstruct the full-time/power user that carries the context in their head
the analogy is something like making a car that a five year old could drive the very first time they enter an automobile ... w/o requiring them to ever have had prior experience, training and/or practice in establishing the mental context associated with driving a car ... i.e. high schools have tended to offer both driving classes and typing classes to give people a sufficient skill base & context.
Even with all that ... indy race cars tend to be somewhat different than the run of the mill street car and are much more effective in the hands of a skilled and experienced user.
Frequently, the novice GUI argument ... is that it reduces everybody to the productivity of the lowest common denominator (experienced indy drivers are unable to operate a hypothetical gui indy car any more effectively than an untrained five year old).
From an economics standpoint ... there are at least 3-4 orders of magnitude more casual users in need of computer training wheels than there are professional power-racer computing users. The corresponding market can be worth hundreds of billions as opposed to hundreds of millions ... so what if the productivity of the power-racer computing user is reduced to that of the training-wheel user.
as a total aside ... recently going thru some boxes that had been in storage looking for something else ... i ran across a whole bunch of stuff from doug (copies of early stuff, press releases, the whole augment description and documentation, etc). When M/D was buying tymshare ... I got called in to evaluate gnosis for them ... since that wasn't going to be part of the M/D operation ... and I also got to spend some time talking to doug since he also wasn't going to be part of the ongoing M/D operation.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM's mess Newsgroups: alt.folklore.computers Date: Wed, 13 Dec 2000 20:29:46 GMT

cbh@REMOVE_THIS.teabag.fsnet.co.uk (Chris Hedley) writes:
it also eliminated being able to make field engineering modifications to the terminal to improve the human factors (i.e. like a FIFO box to mask the keyboard-locked problem, and modifications to get the repeat, delay, and various cursor positioning features to operate like one would wish).
random refs:
https://www.garlic.com/~lynn/2000c.html#63
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: A question for you old guys -- IBM 1130 information Newsgroups: bit.listserv.ibm-main Date: Thu, 14 Dec 2000 00:27:17 GMT

i_am_bobf writes:
somebody had ported spacewars to run on it (from pdp1?)
it was also somewhat the genesis of the internal network ... the first "network" link at csc was between the 360/67 and the 1130. the csc network support went on to grow into vnet, the internal network, bitnet, etc ... i.e. the internal network was larger than the (whole) arpanet/internet until approx. 1985.
random refs:
https://www.garlic.com/~lynn/2000b.html#67
https://www.garlic.com/~lynn/2000e.html#32
https://www.garlic.com/~lynn/2000d.html#15
https://www.garlic.com/~lynn/97.html#2
... see the scamp (5100 pc) reference emulating 1130 allowing apl/1130
to run
http://www.brouhaha.com/~eric/retrocomputing/ibm/5100/
a page i just ran across doing an altavista search
http://www.mindspring.com/~hshubs/1130/
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: SSL as model of security Newsgroups: comp.security.unix Date: Thu, 14 Dec 2000 18:25:17 GMT

Jorge Kinoshita writes:
exchange of a (random) secret key that is then used for session encryption
and
validating that the node that you think you are connecting to somehow relates to the node information listed in the server's digital certificate.
in the past there have been various problems with the method some SSL implementation code used to generate the secret key, the number of bits of the key, and/or the symmetric encryption algorithm used for the session encryption. there have also been issues during the key & algorithm negotiation phase where a man-in-the-middle attack could inject something that resulted in the negotiation downgrading from 128-bit to 40-bit.
independent of the symmetric encryption area of SSL is the part of the protocol dealing with ... are you really talking to the node that you think you are talking to. that gets into whether the node information listed in the certificate means anything, and what's the process by which an organization can acquire an acceptable certificate carrying specific node information (i.e. you could have the strongest symmetric encryption algorithm in the world with a 512-bit key to protect session data in flight and still be giving all your information to the bad guys).
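the "are you really talking to the node you think you are" half is the part application code most often gets wrong. a minimal sketch with python's standard ssl module (a real API ... the host name is just an example):

    import socket, ssl

    ctx = ssl.create_default_context()    # CA bundle, modern protocol versions
    ctx.check_hostname = True             # certificate must match the node name
    ctx.verify_mode = ssl.CERT_REQUIRED   # and must chain to a trusted CA

    with socket.create_connection(("www.example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="www.example.com") as tls:
            print(tls.version(), tls.cipher())   # negotiated protocol & cipher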
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Who Owns the HyperLink? Newsgroups: alt.folklore.computers Date: Thu, 14 Dec 2000 21:29:26 GMT

"David C. Barber" writes:
engelbart's augment work or nelson's hypertext work may or may not qualify as prior art.
NLS/augment had hypermedia features (is there overlap with hyperlink
... can you have hypermedia/hypertext w/o hyperlinks?)
http://www3.zdnet.com/yil/content/mag/9611/hyper/foot4.html
http://www.cc.gatech.edu/classes/cs6751_97_fall/projects/ms-squared/engelbart.html
http://scis.nova.edu/~speranor/DCTE790Assignment2-HTML.htm
https://web.archive.org/web/20010219211434/http://scis.nova.edu/~speranor/DCTE790Assignment2-HTML.htm
nelson & hypertext
http://hoshi.cic.sfu.ca/~guay/Paradigm/Nelson.html
https://web.archive.org/web/20010406064423/http://hoshi.cic.sfu.ca/~guay/Paradigm/Nelson.html
http://aisr.lib.tju.edu/~murray/internet/sld041.htm
http://www.sfc.keio.ac.jp/~ted/XU/XuPageKeio.html
https://web.archive.org/web/20010411074655/http://www.sfc.keio.ac.jp/~ted/XU/XuPageKeio.html
http://www.sun.com/950523/columns/alertbox/history.html
https://web.archive.org/web/20010401024947/http://www.sun.com/950523/columns/alertbox/history.html
from one of the above ..
Ted Nelson
Ted Nelson's contributions to the development of hypertext and
hypermedia are profound, extending even to the coining of the terms
hypertext and hypermedia. Since 1960, he has been developing a
comprehensive paradigm for the implementation of a distributed
hypermedia system that covered the full spectrum of issues; from the
algorithms to the economics. The result of this paradigm is the
ongoing Xanadu project.
The purpose of Xanadu is to establish Nelson's vision of the
Docuverse. Docuverse is the term he coined to describe a global
online library containing, in hypermedia format, all of humanity's
literature. This concept of the Docuverse is one of the foundational
paradigms of the WEB.
Amazingly, he had a prototype running in 1965. This prototype modeled
many of the concepts that make up any hypermedia system, including the
WEB.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Could CDR-coding be on the way back? Newsgroups: comp.lang.lisp,comp.arch Date: Thu, 14 Dec 2000 23:56:53 GMT

Nicholas Geovanis writes:
However, by the late '70s, having all supervisor functions in the same address space as the application, along with various supervisor subsystem requirements, was again starting to severely stress the 24-bit address limit.
while some of the MVS gurus might have believed that they were the biggest address space hogs in the world, some MVS installations were having difficulty leaving even 6mbytes (of the 16mbytes) available to application programs. There were actual applications in the '70s that demonstrated large address space appetites ... some of them large database transaction subsystems that had to exist in the tiny space left in the 16mbyte address space after the MVS supervisor and subsystem requirements were met.
In the initial port of apl/360 to cms/apl ... the apl workspace limit was opened up from typically 32k-64k bytes to just under 16mbytes. There were a number of applications that actually took advantage of the increased workspace size.
One of those was the HONE service. This was the service in the '70s and '80s that supported world-wide sales, marketing, hdqtrs, and field support operations. One example: starting sometime in the mid '70s, IBM mainframe orders became so complex that they couldn't be done manually; a HONE application was needed to fill in the order. Another big use of HONE was for economic planning & modeling ... much of the "what-if" processing done today on PC spreadsheets was performed in APL.
In '77, the US HONE operations were consolidated in a single location in california, with what was at the time believed to be the largest single-system image operation in the world (a cluster of SMP processors sharing a large disk farm). In '78/'79, the single-system image was replicated in dallas and boulder, providing disaster survivability (in case of natural disaster, like an earthquake in cal.). This was in addition to the various HONE clones that resided in numerous countries around the world.
Almost the entire HONE "experience" was delivered first on cms/apl and then later on apl/cms.
random refs:
https://www.garlic.com/~lynn/2000f.html#62
https://www.garlic.com/~lynn/2000f.html#30
https://www.garlic.com/~lynn/2000.html#75
https://www.garlic.com/~lynn/99.html#112
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Could CDR-coding be on the way back? Newsgroups: comp.lang.lisp,comp.arch Date: Fri, 15 Dec 2000 00:44:38 GMT

... oh yes, a somewhat separate issue for the 370: in addition to the virtual 24-bit address there was the issue of the real 24-bit address. With sufficient concurrent applications it was possible to start to stress the 16mbyte real storage limit (and possibly precipitate page thrashing).
many 3033 shops in late '70s (basically a souped up 370/168) started seeing these real-storage constraint problems.
the issue was how to address the problem.
CCWs (i.e. i/o transfers) already supported 31-bit real addresses with IDALs.
To get some slight relief, the 3033 introduced a hack for getting at more than 16mbytes of real storage. The 370 page table entry is 16 bits: 12 bits specifying the real page number (which, combined with 12-bit 4k pages, yields 24-bit addressing), an invalid bit, a protection bit, and two unused bits ... something like
NNNNNNNNNNNNPxxI
where "xx" are the unused/undefined bits. The 3033 hack was to use the two "xx" bits and prefix them to the 12bit page number to allow addressing up to 2*14 real pages ... or 2*26, 64mbytes of real storage. Executable code was still limited to 24bit virtual addresses but it was possible to allocate virtual pages in real storage above the 24bit line by setting the appropriate bits in the page table entry. And of course, the standard 370 CCW IDALs already had 31bits available for addressing real storage in i/o operations.
cross-memory services was also introduced with the 3033, in an attempt to get some of the supervisor subsystem functions out of the same address space as the application (at least to the point where maybe a whole virtual 8mbytes was available to applications) ... w/o a significant downside impact on the MVS "address-pointer" parameter paradigm. these supervisor subsystem functions had to reside in their own address spaces while still directly supporting services requiring addressing of the application's virtual address space ... so cross-memory services introduced new addressing modes allowing instructions to address virtual storage different from the virtual address space they were executing in.
random refs:
https://www.garlic.com/~lynn/99.html#99
https://www.garlic.com/~lynn/99.html#7
https://www.garlic.com/~lynn/2000e.html#57
https://www.garlic.com/~lynn/2000g.html#11
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Could CDR-coding be on the way back? Newsgroups: comp.lang.lisp,comp.arch Date: Fri, 15 Dec 2000 01:01:48 GMT

Anne & Lynn Wheeler writes:
giving the 3033 announce & ship dates
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Could CDR-coding be on the way back? Newsgroups: comp.lang.lisp,comp.arch Date: Fri, 15 Dec 2000 20:12:58 GMT

pekka@harlequin.co.uk (Pekka P. Pirinen) writes:
one of the other projects at csc was vs/repack ... which included page reference monitoring and drawing "pictures" of the operation with a special TN print train. At times the halls of the 4th floor, 545 tech. sq were papered with these reference pictures ... basically five-foot-long sections of reversed green-bar paper with storage accesses along the length and time along the width. A whole series of these taped to the wall (length running vertical) gave several seconds of program execution as you moved down the hall.
The bottom of the wall tended to show a solid pattern of use ... but there was a very strong sawtooth pattern where pages were touched, used for very short periods, and the storage allocation moved up until it reached the top of storage ... and then there was a solid line as all allocated data was compacted back down to low storage. In a virtual memory environment, this tended to result in an application using the maximum amount (all) of available virtual memory regardless of the application size or complexity ... with a bimodal reference pattern ... partially LRU and partially MRU (i.e. the least recently used page wasn't likely to be used again for a while).
Part of the less obvious things that we had to do in the '71 time-frame for the port of apl/360 to cms/apl was a dynamically adaptive memory collector.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: stupid user stories Newsgroups: alt.folklore.computers Date: Mon, 18 Dec 2000 14:31:31 GMT

"Paul Grayson" writes:
most of the other chord keyboards i've seen look more like flat piano keys, and the device isn't designed to move. It has been a while ... but I believe I remember seeing a ("flat") chord keyboard used with the engelbart/augment system at tymshare (I would have to go back and leaf thru the documentation).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Multitasking and resource sharing Newsgroups: bit.listserv.ibm-main Date: Mon, 18 Dec 2000 20:55:33 GMT

edjaffe@PHOENIXSOFTWARE.COM (Edward E. Jaffe) writes:
random ref:
https://www.garlic.com/~lynn/97.html#19
https://www.garlic.com/~lynn/99.html#176
https://www.garlic.com/~lynn/2000g.html#16
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: does CA need the proof of acceptance of key binding ? Newsgroups: sci.crypt Date: Tue, 19 Dec 2000 21:17:57 GMT

Mok-Kong Shen <mok-kong.shen@t-online.de> writes:
a person has an account number.
they need to get some bank characteristic and public key bound together into a certificate. for the most part, that feature is not being supported by the bank where the account is located.
a transaction is signed with the private key; the transaction is also (at least) partially encrypted (especially the account number); the transaction, an appended digital signature, and a certificate are transmitted to a merchant over the internet.
the merchant then does some stuff and transmits the transaction, the digital signature and the certificate to an internet gateway.
the internet gateway uses the public key in the certificate to verify the digital signature.
the internet gateway then generates an (iso) 8583 transaction turning on the authenticated signature flag.
the transaction is then handled in the normal way.
.........
couple of notes ...
this is still a standard account number ... i.e. effectively a shared-secret that can also be used in vanilla, non-authenticated transactions ... as a result there is significant exposure in storing it, i.e. when it is "at rest" (both SET and SSL encrypt the number while "in flight" ... but the big exposure is when it leaves internet transmission and is actually used in the standard processes).
for retail transactions, identity is a serious privacy issue. authentication wants to know that the entity authorized to execute transactions against an account is the entity executing the transactions. divulging identity (aka an identity certificate) for retail transactions represents a serious invasion of privacy ... and I believe is also counter to current EU regulations (stating that all retail transactions need to be as anonymous as cash ... aka even the name on existing cards and/or magstripes represents a serious privacy problem). An identity certificate could represent an even more serious privacy problem than existing name-embossed cards.
a couple years ago, one of the credit card associations presented numbers at an ISO meeting on 8583 transactions flowing through the network with the authenticated signature flag turned on ... where they knew there was no digital signature technology involved.
a couple of sizings of standard SET certificates have put them in the range of 4kbytes to 12kbytes. a typical 8583 transaction is 60-100 bytes, and aggregate peak transaction loads can hit several thousand per second. Actually flowing a SET certificate end-to-end on any real volume of transactions represents a serious capacity problem (serious transaction size bloat).
and of course, then there is X9.59 ... standard from the X9 financial standards body ... the task given the X9A10 work group for X9.59 was to preserve the integrity of the financial infrastructure for all electronic retail payment (account-based) transactions (credit, debit, card, ach, check, etc). It has passed X9 and is entering its two year trial period.
X9.59 specifies an account number that can only be used in authenticated transactions (eliminating the problem of the account number being a shared-secret representing a significant fraud exposure). X9.59 also specifies end-to-end authenticated transactions (i.e. basic, entry-level security principles: the party responsible for authorizing and executing the transaction is also responsible for authenticating it).
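a gateway-side sketch of what "usable only in authenticated transactions" means operationally (python; purely my own illustration of the rule, not anything out of the standard's text):

    def normal_authorization(txn, acct):
        return "APPROVE"              # stub: funds/limit checks would go here

    def authorize(txn, acct_db, verify_signature):
        acct = acct_db[txn["account"]]
        authenticated = (txn.get("signature") is not None
                         and verify_signature(txn, acct["public_key"]))
        if acct["x959_only"] and not authenticated:
            return "DECLINE"    # a copied account number alone buys nothing
        return normal_authorization(txn, acct)

    db = {"4111": {"x959_only": True, "public_key": "..."}}
    print(authorize({"account": "4111"}, db, lambda t, k: False))   # DECLINE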
misc refs to x9.59 work
https://www.garlic.com/~lynn/
--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: does CA need the proof of acceptance of key binding ? Newsgroups: sci.crypt Date: Wed, 20 Dec 2000 15:04:41 GMT

cwsheng writes:
that is ignoring the previously mentioned issue that, in the retail transaction case, real identity proofing represents a serious invasion of privacy (i.e. any requirement to use a real consumer identity certificate in a consumer retail transaction).
--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: does CA need the proof of acceptance of key binding ? Newsgroups: sci.crypt Date: Fri, 22 Dec 2000 04:08:12 GMT

Mok-Kong Shen <mok-kong.shen@t-online.de> writes:
the interesting challenge is to make it even easier ... comparable to "card activation" when receiving a new card, but instead of an audio-response telephone call, say an online bank website with an SSL session that then requires some series of challenge/responses about things that both you and the bank know ... name, address, account number, various things from statements, and misc. other details.
Depending on the number & quality of these interactions, the bank can assign a risk value to the registering of a public key; in theory, the larger the number &/or the better the quality of the interactions, the closer the risk assignment approaches that of an "in person visit".
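a sketch of the risk-value idea (python; the challenge weights and thresholds are invented for illustration):

    # invented weights, purely illustrative
    WEIGHTS = {"name": 1, "address": 1, "account_number": 2,
               "statement_detail": 3, "prior_transaction": 3}

    def registration_risk(passed_challenges):
        score = sum(WEIGHTS[c] for c in passed_challenges)
        # more (and better-quality) interactions -> risk assignment
        # approaches that of an in-person visit
        if score >= 8:
            return "low (comparable to in-person)"
        return "medium" if score >= 4 else "high"

    print(registration_risk(["name", "address",
                             "statement_detail", "prior_transaction"]))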
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: stupid user stories Newsgroups: alt.folklore.computers Date: Fri, 22 Dec 2000 12:46:25 GMT

Eric Fischer writes:
while there I went to a movie in downtown madrid. in addition to the movie they had a 15-minute short ... produced at the university ... a very surrealistic thing that I didn't completely follow ... but prominent was a wall of tv sets which were all scrolling the same text at 1200 baud. imagine my astonishment when i recognized a vm/370 kernel "load map" ... what's worse, I could tell the year & month of the kernel build.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: stupid user stories Newsgroups: alt.folklore.computers Date: Fri, 22 Dec 2000 12:48:22 GMT

Anne & Lynn Wheeler writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM) Newsgroups: comp.arch Date: Fri, 22 Dec 2000 18:12:02 GMT

"Bill Todd" writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Could CDR-coding be on the way back? Newsgroups: comp.arch Date: Sun, 24 Dec 2000 20:34:36 GMT

Del Cecchi writes:
probably the highest traffic hierarchy on the internal network in the early to mid '80s was IBMPC (as an aside, the internal network was larger than the whole arpanet/internet until around '85).
a flavor of it was shared with bitnet members ... somewhat leading to listserv on the general internet today (and some forums currently gatewayed to usenet under the bit. hierarchy).
random refs:
https://www.garlic.com/~lynn/94.html#33b
https://www.garlic.com/~lynn/99.html#24
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: No more innovation? Get serious Newsgroups: alt.folklore.computers Date: Mon, 25 Dec 2000 16:57:01 GMT

benc@krustbustr.hawaga.org.uk (Ben Clifford) writes:
random ref
https://www.garlic.com/~lynn/94.html#22
Cary was flying up to Provo from San Jose about once a week; the
DataHub project had a contract with a small group in Provo to write
some of the DataHub code. Eventually, GPD canceled the project and
let the group in Provo have all the rights. I believe this was the
genesis of a (PC) network company that is still around in the Provo
area.
----------------------------------
An Overview of the DataHub Project
by: Cary WR Campbell, GPD Advanced Systems Technology
ABSTRACT
DataHub is a prototype repository which provides highly-reliable
storage of shared and private data objects for project-oriented
workstations connected by a local area network (LAN).
This presentation discusses an emerging project-oriented computing
environment and outlines the DataHub Project objectives and plans.
Among the key ad tech areas investigated are:
tying DASD and LANs together
sharing of programs and data among non-cooperating users
non-stop operation
high-level design language
multi-microprocessor (off-the-shelf) design and packaging
controlled user experiments, instrumented for productivity
measurements
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Egghead cracked, MS IIS again Newsgroups: comp.security.unix,comp.os.ms-windows.nt.admin.security,comp.security.misc Date: Wed, 27 Dec 2000 16:50:08 GMT

Michael Erskine writes:
The net result of x9.59 is that it is not possible to copy down an account number and use it in another (non-authenticated) transaction.
disclaimer: I participated in the definition of x9.59 as a member of the x9a10 work group.
random refs:
https://www.garlic.com/~lynn/aepay2.htm#privrules
https://www.garlic.com/~lynn/2000g.html#5
https://www.garlic.com/~lynn/aadsm3.htm#cstech8
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM) Newsgroups: comp.arch Date: Wed, 27 Dec 2000 21:08:42 GMT

ttonino writes:
In '70, the follow-on device was the 2305 fixed-head drive, which did 3mbyte/sec transfer and had about 12mbyte capacity. A special high-performance version of the 2305 had a dual set of fixed heads offset 180 degrees. It read/wrote from either set of heads (cutting the avg. rotational delay in half) ... but didn't transfer from both simultaneously, since it was already operating at the max. transfer rate.
I believe that by '80, disks started having servo feedback on a per-platter basis. It started being easier to have multiple heads on the same arm doing transfers from multiple parallel tracks on the same platter, than trying to do simultaneous transfers from different platters using different heads on the same servo mechanism.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM) Newsgroups: comp.arch Date: Thu, 28 Dec 2000 17:03:43 GMT

Paul Repacholi writes:
in the past there have been some vendor system configurations where a power failure might mean there was sufficient power in a disk drive to correctly write a full record with correct ECC, but not necessarily enuf power to transmit the full record from processor memory ... i.e. the disk drive would supply zeros for the missing part of the record and then write a correct ECC. The problem was that the system was expecting the disk drive to indicate an error in the case of an incorrect or incomplete write. This particular failure mode resulted in parts of the filesystem metadata being inconsistent, with no disk error indication. A similar inconsistency might occur when filesystem integrity is dependent on disk I/O error indication and the metadata block unit changes from the disk record size to a multiple of the disk record size.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM) Newsgroups: comp.arch Date: Thu, 28 Dec 2000 18:34:36 GMT

Andi Kleen writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM) Newsgroups: comp.arch Date: Thu, 28 Dec 2000 21:10:01 GMT

Anne & Lynn Wheeler writes:
the nominal ibm mainframe i/o process was to have a single i/o request sequence per device. nominal operation involved the device signaling the processor when a request had finished, the processor taking an I/O interrupt, performing some processing on the finished request, and then redriving the device with the next request in the queue. This processor device-redrive latency could result in the device idling, missed rotational position, and reduced device I/O thruput.
the 2305 fixed-head introduced multiple concurrent request queues (in mainframe processor I/O architecture, the 2305 looked like 8 independent devices, each with its own request queue). the processor could schedule eight independent requests concurrently and the 2305 could optimize the order of request execution. Furthermore, the processor device-redrive latency was masked, since there could be seven queued requests still active at the point when any one request completed.
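a toy model of why the multiple queues mask the redrive latency (python; the numbers are invented): with a single request queue the device eats the interrupt-to-redrive gap after every transfer; with multiple queued requests the next one starts immediately while the host refills the queue:

    def total_time(n_requests, service, redrive_gap, exposures):
        if exposures == 1:
            # device idles for the redrive gap after every request
            return n_requests * (service + redrive_gap)
        # next request is already queued at the device; the host refills
        # the queue while the other ~7 requests are being worked off
        return n_requests * service + redrive_gap

    print(total_time(1000, service=5.0, redrive_gap=2.0, exposures=1))  # 7000.0
    print(total_time(1000, service=5.0, redrive_gap=2.0, exposures=8))  # 5002.0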
later in the '70s, the 3350 moveable-head disk was introduced ... about 640mbytes capacity. It came with a fixed-head option ... extra heads covering a portion of the platter area. The problem with the 3350 fixed-head area was that, while it wasn't necessary to actually move the arm to read/write that data, the 3350 only supported the standard single device request queue ... i.e. if the device was already involved in an i/o operation moving the disk arm, it was not possible to concurrently transfer data from the fixed-head area.
I tried to sponsor a business case where the 3350 fixed-head feature would be enhanced to support multiple request queueing similar to the 2305 (primarily based on enhanced system paging performance). Unfortunately, the business case got squelched by a different product group that was developing a dedicated device solely for enhanced system paging performance (as opposed to an incremental 3350 feature allowing the 3350 to be used for both standard system data and enhanced system paging).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: A new "Remember when?" period happening right now Newsgroups: alt.folklore.computers Date: Thu, 28 Dec 2000 23:41:42 GMT

rmonagha@smu.edu (Robert Monaghan) writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM) Newsgroups: comp.arch Date: Thu, 28 Dec 2000 23:54:58 GMT

dsiebert@excisethis.khamsin.net (Douglas Siebert) writes:
the original design had a version number at the start of each record, with alternating records ... which handled the failure mode where a power failure during the write meant that a valid ECC was not written (reading during recovery resulted in an error) ... but didn't handle the power failure where zeros were propagated thru the end of the record and a correct ECC was written.
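a sketch of why a version number at the start alone misses the zeros-propagated case, and how a matching version at both ends of the record catches it (python; my own illustration, not the actual design ... versions assumed nonzero):

    VER_LEN = 4

    def write_record(version, payload, record_len):
        ver  = version.to_bytes(VER_LEN, "big")
        body = payload.ljust(record_len - 2 * VER_LEN, b"\0")
        return ver + body + ver           # same version at start AND end

    def check_record(rec, expected_version):
        ver = expected_version.to_bytes(VER_LEN, "big")
        # a start-only check is fooled when the drive zero-fills the tail
        # (correct ECC written, no i/o error): the front version survived.
        # checking the trailing copy too catches the zeroed tail.
        return rec.startswith(ver) and rec.endswith(ver)

    good = write_record(7, b"metadata", 64)
    torn = good[:16] + b"\0" * (len(good) - 16)   # power fail: tail zeroed
    print(check_record(good, 7), check_record(torn, 7))    # True False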
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Use of SET? Newsgroups: alt.security,comp.security.misc Date: Fri, 29 Dec 2000 15:26:46 GMT

Neil J Cummins writes:
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Use of SET? Newsgroups: alt.security,comp.security.misc Date: Fri, 29 Dec 2000 16:01:03 GMT

Neil J Cummins writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Egghead cracked, MS IIS again Newsgroups: comp.security.unix,comp.os.ms-windows.nt.admin.security,comp.security.misc Date: Fri, 29 Dec 2000 21:50:52 GMT

Michael Erskine writes:
There are a number of problems mapping to an infrastructure that spent most of its years not worrying a lot about commercial hardening issues (in general, security can be treated as a subset of the generalized failure-mode problem).
Another failing was that some took message protocols that had an implicit design point of a circuit-based network and did a straight port to a packet network ... w/o even giving a thot to the SLAs (service level agreements) and diagnostic procedures that went with the circuit-based networks (completely independent of the issues of closed circuit-based vs. "open" packet-based). For instance, try getting an SLA for 4-nines to 5-nines end-to-end availability from your ISP to some random other entity using some other ISP.
random refs:
https://www.garlic.com/~lynn/aadsmore.htm#dctriv
https://www.garlic.com/~lynn/99.html#49
https://www.garlic.com/~lynn/99.html#48
https://www.garlic.com/~lynn/internet.htm
https://www.garlic.com/~lynn/99.html#163
https://www.garlic.com/~lynn/99.html#219
https://www.garlic.com/~lynn/99.html#224
https://www.garlic.com/~lynn/rfcietff.htm
https://www.garlic.com/~lynn/aadsmore.htm#client3
https://www.garlic.com/~lynn/aadsmore.htm#setjava
in general, the ARPANET/Internet has been around just about as long as the internal network. While the internal network was larger than the whole arpanet/internet until sometime in '85 ... there was also a lot more attention given to commercial hardening issues and detailed failure-mode analysis related to the internal network's operation.
while not directly network related ... another contrast of commercial
hardening issues vis-a-vis not commercial hardening (this time with
respect to disk drives and filesystems). The following is related to
what happens if there happens to be a power-failure at just the moment
that critical filesystem write occurs.
https://www.garlic.com/~lynn/2000g.html#43
https://www.garlic.com/~lynn/2000g.html#44
https://www.garlic.com/~lynn/2000g.html#47
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: > 512 byte disk blocks (was: 4M pages are a bad idea) Newsgroups: comp.arch Date: Fri, 29 Dec 2000 22:14:20 GMT

Brian Inglis writes:
OS's made extensive use of the feature for the VTOC (basically the volume's file directory) and PDSs (partitioned data sets ... basically something like a special one-level-deep directory/library; for instance, much of the system was placed in the sys1.linklib PDS ... and entries could be found by doing a CKD search of the entries in the directory).
one I/O operation could find the appropriate information w/o requiring any filesystem information cached in memory.
The problem was that by at least the mid '70s the trade-off had reversed: memory was becoming abundant and I/O capacity was being strained. By that time, overall system efficiency was improved by caching filesystem information and not wasting I/O capacity doing (linear) searches of tags on disk.
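the trade-off in sketch form (python; the timings are invented, order-of-magnitude only): a multi-track search ties up the channel/device for a rotational pass over every directory track, while a cached directory answers from memory and pays one i/o for the member itself:

    ROTATION_MS = 16.7      # one revolution at 3600 rpm
    IO_MS       = 30.0      # one ordinary read (seek + latency + transfer)

    def ckd_search(directory_tracks):
        # channel & device busy a full revolution per directory track
        return directory_tracks * ROTATION_MS + IO_MS  # then read the member

    def cached_lookup(cache, member):
        track = cache[member]     # in-memory lookup, effectively free
        return IO_MS              # single i/o straight to the member

    cache = {"IEFBR14": 42}
    print(ckd_search(directory_tracks=10))    # ~197ms of channel-busy time
    print(cached_lookup(cache, "IEFBR14"))    # 30ms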
While it was possible to show significantly increased system thruput by not using any of the count-key-data search features, their use was ingrained in many places in the system. Around '82 or '83, I got a price tag of $26m for the MVS changes to not use multi-record & multi-track search for the VTOC and PDSs (i.e. regardless of whether the system was using 3380 ckd devices or 3370 fba devices).
CMS, VM/370, CP/67, etc. supported CKD disks from the '60s ... but never relied on the multi-record & multi-track search features of CKD ... and so were also able to support FBA disks with relative ease.
However, it was not possible to get MVS to stop using the multi-record and multi-track search operations ... even when it was possible to demonstrate that MVS modified to run w/o the CKD searches had higher system thruput (even using the exact same devices).
random refs:
https://www.garlic.com/~lynn/93.html#29
https://www.garlic.com/~lynn/94.html#35
https://www.garlic.com/~lynn/97.html#16
https://www.garlic.com/~lynn/97.html#29
https://www.garlic.com/~lynn/99.html#75
https://www.garlic.com/~lynn/2000f.html#18
https://www.garlic.com/~lynn/2000f.html#19
https://www.garlic.com/~lynn/2000f.html#42
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: > 512 byte disk blocks (was: 4M pages are a bad idea) Newsgroups: comp.arch Date: Fri, 29 Dec 2000 22:20:29 GMT

... oops typo; with -> w/o
However, it was not possible to get MVS to stop using multi-record and
multi-track search operations ... even when it was possible to
demonstrate that MVS modified to run w/o CKD searchs had higher
oops typo, i.e. MVS modified to cache and not do CKD search had higher
system thruput.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Egghead cracked, MS IIS again Newsgroups: comp.security.unix,comp.os.ms-windows.nt.admin.security,comp.security.misc Date: Sun, 31 Dec 2000 19:30:27 GMT

safado writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/