From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Tue, 10 Apr 2007 15:24:52 -0600

eugene@cse.ucsc.edu (Eugene Miya) writes:
I'm waiting ...
but while waiting for you to ask them to comment ... how 'bout a sample of some RFC references (from my RFC index) for some idea of the size of the early arpanet:
clicking on the ".txt=nnn" field in the RFC summary retrieves the actual RFC
https://www.garlic.com/~lynn/rfcidx0.htm#293
293 -
Network host status, Westheimer E., 1972/01/18 (3pp) (.txt=7639)
(Obsoleted by 298) (Obsoletes 288) (Updates 288)
https://www.garlic.com/~lynn/rfcidx0.htm#235
235 -
Site status, Westheimer E., 1971/09/27 (4pp) (.txt=7994)
(Obsoleted by 240)
... snip ...
based on the stats in the various referenced RFCs, the uptime of the early arpanet wasn't very good.
since some number of the machines mentioned in the above RFCs were ibm machines ... it should be obvious that SNA wasn't the only mechanism in use by ibm mainframes. In fact, SNA was primarily a master/slave terminal control infrastructure (aka "VTAM", virtual telecommunications access method ... somewhat a terminal control follow-on to TCAM ... telecommunications access method) ... not really suited for doing peer-to-peer networking operations. and, in fact, SNA wasn't even announced until 1974:
http://www-03.ibm.com/ibm/history/history/year_1974.html
before that, there were (at least) two early internal network
activities, 1) one was sometimes referred to as "SUN" ... os/360 batch
oriented systems based on HASP (in large part growing out of the HASP
network updates from TUCC) and 2) the work at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech
with a cp67-based implementation (between cp67 machines). Neither of these involved SNA ... and in fact, their origins predate SNA.
The HASP-based network implementation actually suffered some of the same kinds of limitations as the arpanet in terms of addressability and (mostly) requiring end-to-end homogeneous software operation of the network nodes. In the arpanet case, that support was in the IMPs, while in the HASP case, all the support was in the mainframe, not outboard (the requirement for a separate, managed BBN box might even be considered one of the inhibitors of arpanet growth).
The CP67 origin stuff was much more flexible, having a kind of layered gateway architecture (more akin to later internetworking protocol) ... and when a "HASP" gateway driver was created for CP67 ... the two groups/collections of internal machines then were able to form a common network.
misc. past posts mentioning HASP, JES, HASP/JES networking
implementation limitations, etc
https://www.garlic.com/~lynn/submain.html#hasp
eventually the cp67/vm370-based infrastructure came to dominate the internal network (still not having anything to do with sna) ... and in fact was leveraged by the HASP/JES operations to provide format translations between different versions/releases (of HASP/JES ... such incompatibilities were known to crash the respective MVT/SVS/MVS operating systems, i.e. an intermediate cp67/vm370 node could be required to even allow two different HASP systems to communicate).
misc. old email touching on the internal network
https://www.garlic.com/~lynn/lhwemail.html#vnet
for some completely random topic drift ... the primary person
(associated with dataquest) doing the high-speed interconnect study
for the ha/cmp scale-up activity, mentioned here:
https://www.garlic.com/~lynn/2007g.html#81 IBM to the PCM market
in relation to the work mentioned in these old emails
https://www.garlic.com/~lynn/lhwemail.html#medusa
had a decade earlier worked at the Santa Teresa lab ... and a decade or so before that, as an undergraduate at UCSB, had been hired to do network penetration testing (before the UCSB arpanet connection was activated)
misc. past threads/posts where you've made similar comments on the
subject:
https://www.garlic.com/~lynn/2002b.html#53 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#54 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#56 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#57 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#58 ibm vnet : Computer Naming Conventions
https://www.garlic.com/~lynn/2002k.html#19 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002k.html#20 Vnet : Unbelievable
https://www.garlic.com/~lynn/2004g.html#26 network history
https://www.garlic.com/~lynn/2004g.html#31 network history
https://www.garlic.com/~lynn/2004g.html#32 network history
https://www.garlic.com/~lynn/2004g.html#33 network history
https://www.garlic.com/~lynn/2006j.html#45 Arpa address
https://www.garlic.com/~lynn/2006k.html#8 Arpa address
https://www.garlic.com/~lynn/2006k.html#9 Arpa address
https://www.garlic.com/~lynn/2006n.html#2 The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2006t.html#36 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006t.html#43 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006u.html#7 The Future of CPUs: What's After Multi-Core?
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 21st Century ISA goals?
Newsgroups: comp.arch
Date: Wed, 11 Apr 2007 09:52:42 -0600

Stephen Fuld <S.Fuld@PleaseRemove.att.net> writes:
the 370/158 engine had two sets of microcode that were shared execution on the same processor ... the 370 microcode and the "integrated channel microcode" (that had support for up to six configured channels).
for the "next generation" (after 370), the 303x ... they took a 158 engine w/o the 370 microcode and packaged it as an independent box, the "channel director".
A 3031 was a repackaged 370/158 with only the 370 microcode (and w/o the integrated channel microcode) and an external "channel director" (which could be considered a two-processor SMP but with the two engines running different microcode).
A 3032 was a repackaged 370/168 that could be configured with one to three external channel directors (for up to 16 channels).
A 3033 started out as a 168 wiring diagram mapped to faster chip technology.
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Mainframe in 10 Years...
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 11 Apr 2007 10:42:04 -0600

abain@ibm-main.lst (Alan Bain) writes:
an older post containing a summary of the jun90 FORRESTER report "MAINFRAME R.I.P."
https://www.garlic.com/~lynn/2001n.html#79 a.f.c. history checkup
based on a survey in mid-89 and some predictions out thru 99
doesn't sound like a lot has changed except possibly use of virtualization for server consolidation.
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 21st Century ISA goals?
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 11 Apr 2007 13:42:18 -0600

ChrisQuayle <nospam@devnul.co.uk> writes:
there were some differences ... the '60s flavor allowed the program and/or data descriptors to be changed on the fly ... so there was a strict requirement for no prefetching (somewhat akin to very strong memory consistency in multiprocessor operation)
the 60s had a lot more I/O capacity than there was real storage ... as a result there was a trade-off keeping lots of file infrastructure on disk and using the disk-based infrastructure to control the channel program.
a simple example was ISAM (indexed sequential access method), which would locate/search for a particular record on disk (less-than, equal, greater-than, etc) ... then read the record that was to be the argument to a subsequent locate/search command in the same channel program.
another example was long-running channel programs ... a particular channel command word (CCW) could have the PCI flag set (program controlled interrupt) ... which would schedule an interrupt to the processor ... even tho the channel program hadn't completed. This gave processor code a chance to change subsequent parts of the channel program "on-the-fly" (either arguments and/or commands).
ISAM (or other implementations) channel programs could get even more complex ... not only changing arguments of subsequent commands ... but also changing the commands themselves (having read something from the device).
the requirement for no prefetching ... in support of possible "on-the-fly" modifications ... placed some distance limitations on operations ... especially later when looking at longer distance fiber extensions. for instance, even the record locate/search argument was refetched from (processor) memory as each record (on disk) was encountered ... so there were some latency issues related to disk rotation.
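the self-modifying style above can be sketched in miniature. this is a toy model in python, nothing like real 360 CCW formats -- the "SEARCH_READ" opcode, the addresses, and FakeDisk are all invented -- but it shows the property that matters: the argument is fetched from processor memory only when the command executes, so an earlier read in the chain can supply a later command's search argument.

```python
# Toy model of an ISAM-style self-modifying channel program. No prefetch:
# each CCW's argument is fetched from "processor memory" at execution time,
# so an earlier command can overwrite what a later command will use.

class FakeDisk:
    """Invented device: index record "idx" names the data record to fetch."""
    def __init__(self):
        self.records = {"idx": "data7", "data7": "payload"}
    def read(self, key):
        return self.records[key]

def run_channel_program(memory, chain, disk):
    """memory: dict addr -> value; chain: list of (opcode, addr) 'CCWs'."""
    pc = 0
    while pc < len(chain):
        op, addr = chain[pc]               # fetched only when reached
        if op == "SEARCH_READ":            # locate record whose key is the
            key = memory[addr]             #   argument (refetched from memory)
            memory[addr] = disk.read(key)  # read result back over the argument
        pc += 1
    return memory

mem = {0x100: "idx"}                       # initial search argument
run_channel_program(mem,
                    [("SEARCH_READ", 0x100),   # reads index -> "data7"
                     ("SEARCH_READ", 0x100)],  # now searches for "data7"
                    FakeDisk())
```

after the chain runs, mem[0x100] holds the data record found via the index -- the second command's argument was written by the first.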
there was a big deal made in 1980 when disk farm max channel cable length restrictions were doubled from 200ft to 400ft.
sometime in the 80s ... larger clusters of processors sharing football-field-sized arrangements of disk farms ... would sometimes start going to 3-d configurations ... because of channel cable distance limitations (related to end-to-end latency restrictions). Start with processors located somewhat in the center of a disk farm expanse with possibly a 100yd radius ... and then go to a 3-d multiple-floor configuration, with channel cable length restrictions starting to form an operational sphere.
the storage size vis-a-vis i/o capacity trade-off changed in the 70s ... but you still had customers with configurations that had multi-cylinder file structure information (well into the 80s ... and possibly some continuing today). A full-cylinder "search" could take 1/3 sec. elapsed time ... and kept the channel resources dedicated the whole time because of the requirement to refetch the search argument on each record compare. some past posts discussing effects of this characteristic and the change in resource trade-offs w/o changing the implementation paradigm
https://www.garlic.com/~lynn/submain.html#dasd
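the 1/3 sec. number falls straight out of the disk geometry. a back-of-envelope using 3330-class figures (3600 rpm, 19 tracks per cylinder) -- treat the numbers as illustrative:

```python
# Back-of-envelope for the "1/3 sec" full-cylinder search, using
# 3330-era figures (3600 rpm spindle, 19 tracks per cylinder).
rpm = 3600
tracks_per_cylinder = 19

rotation_sec = 60.0 / rpm                        # one revolution ~16.7 ms
search_sec = tracks_per_cylinder * rotation_sec  # multi-track search sweeps
                                                 # the whole cylinder
print(round(search_sec, 3))                      # ~0.317 sec -- about 1/3 sec
# and the channel stays dedicated that whole time, since the search
# argument is refetched from processor memory on every record compare
```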
the other thing that showed up in the 70s was the 1) increasing configuration size ... so a much higher probability of loaded systems and request queuing and 2) larger processors being built with processor caches.
The asynchronous i/o interrupts could wreak havoc with cache hit ratios. The operating system resource manager that I released in the mid-70s had a hack that would dynamically track the asynchronous i/o interrupt rate ... and at some threshold switch to dispatching tasks disabled for interrupts for short periods. This would slightly delay some i/o interrupts (and the associated processing, increasing i/o processing latency) ... but tended to improve application thruput and cache hit ratio. It also had some tendency to result in interrupt batching (several i/o interrupts processed in series). This in turn tended to improve the kernel interrupt processing cache hit ratio ... and could even result in the avg. interrupt processing latency declining.
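the batching effect can be shown with a crude cost model: each time the kernel's interrupt path runs "cold" it pays a cache-refill penalty, while several queued interrupts processed in series pay that penalty roughly once per batch. the costs here are made-up illustrative numbers, not measurements from the actual resource manager.

```python
# Crude model of interrupt batching: running the interrupt path "cold"
# costs a cache-refill penalty; a batch pays it once, then processes
# the queued interrupts back-to-back. Costs are invented, units abstract.

def total_cost(n_interrupts, batch_size, refill=50, work=10):
    batches = -(-n_interrupts // batch_size)   # ceiling division
    return batches * refill + n_interrupts * work

unbatched = total_cost(1200, batch_size=1)     # every interrupt runs cold
batched = total_cost(1200, batch_size=6)       # disabled-window batching
print(unbatched, batched)
```

per-interrupt latency rises slightly (interrupts wait for the disabled window to end), but total interrupt-path cost drops, which is how average processing latency can actually decline.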
The other issue, with a high probability of queued requests, was that "re-drive" latency started becoming a measurable factor (i.e. the latency between the time a pending i/o interrupt was processed and the next queued request was initiated) ... especially as the favorite son operating system became more and more bloated.
To somewhat address both issues, a queued initiation/termination interface was introduced in the early 80s with 370-XA (initially on 3081). Channel programs could be scheduled for initiation w/o the resource being available (the 360 SIO i/o initiation instruction was synchronous and interrogated availability of all resources ... clear out to the device ... prior to channel program initiation and proceeding with next instruction processing). The initiation could also specify that rather than an asynchronous interrupt on completion ... just update a defined control infrastructure.
Actually, 370 (1970) had first introduced an intermediate i/o initiation between the 360 SIO and the 370-XA start subchannel ... which was SIOF (sio "fast"). The SIOF instruction would hand off the channel program to the channel but w/o waiting for the interrogation delay clear out to the device (eliminating the "stall" associated with the SIO instruction).
The 370-XA actual channel program execution still prevented prefetching and could still require constant access to processor memory ... however initiation and termination of the channel program no longer required synchronized processor execution (eliminating the redrive latency; this could also be leveraged to minimize the detrimental effects of asynchronous interrupts on cache hit ratios).
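the difference between synchronous SIO-style initiation and the queued style can be sketched in plain python. the class and method names here are mine, not the architecture's -- a minimal sketch of the idea, not 370-XA itself:

```python
# Sketch of a queued initiation/termination interface: requests are
# enqueued without interrogating availability out to the device, and
# completion is posted to a status area instead of (necessarily) raising
# an asynchronous interrupt per operation. Names are invented.

from collections import deque

class QueuedSubchannel:
    def __init__(self):
        self.pending = deque()   # scheduled channel programs
        self.status = {}         # request id -> posted completion status

    def start(self, req_id, channel_program):
        # no availability interrogation, no processor stall: just queue it
        self.pending.append((req_id, channel_program))

    def io_subsystem_step(self):
        # "hardware" side: run one queued program, post status to the
        # control area rather than interrupting the processor
        if self.pending:
            req_id, _prog = self.pending.popleft()
            self.status[req_id] = "complete"

sub = QueuedSubchannel()
sub.start(1, ["seek", "search", "read"])
sub.start(2, ["seek", "write"])     # queued even while the device is busy
sub.io_subsystem_step()
sub.io_subsystem_step()
print(sub.status)
```

the second request sits queued in the i/o subsystem rather than in the operating system, which is what takes the redrive latency out of the processor's hands.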
current description of operation
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/15.0?SHELF=DZ9ZBK03&DT=20040504121320
and since the operating system was no longer seeing exact operation initiation/termination ... some of the information gathered for dynamic resource management, reporting, and capacity planning was compromised ... so compensating processes and information gathering had to be added to the i/o subsystem
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/17.0?SHELF=DZ9ZBK03&DT=20040504121320
discussion of some of the make-over that changed from 370:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/F.3.2?SHELF=EZ2HW125&DT=19970613131822
I had first done something along these lines in 1975 when working on a 370 5-way smp machine (that was never announced/shipped). It had extensive microcode capability ... and I defined both a queued interface for disk i/o as well as a queued interface for dispatching/scheduling. Part of this was that it minimized some of the multiprocessor complexity ... no longer having to worry about serializing which processor was doing a SIO for any specific channel at any specific time. Treat it purely as a multiprocessor multi-access shared storage control metaphor ... adding stuff to and removing stuff from queues (microcode could worry about whether the target processing unit was busy ... and would later get around to checking for additional work ... or was idle and had to be signaled to indicate arrival of new work).
https://www.garlic.com/~lynn/submain.html#bounce
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 21st Century ISA goals?
Newsgroups: comp.arch
Date: Wed, 11 Apr 2007 14:10:22 -0600

nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
as to processor caches ... the channel i/o processing did require full ("multiprocessor") cache consistency support ... for commands, arguments, and data.
i/o had to signal processor caches on all(?) storage alterations ... and (especially store-into) caches had to possibly be interrogated for all command fetches as well as all i/o argument fetches (in some cases there was little differentiation between what might be an i/o command control argument fetch ... and an i/o command data transfer fetch).
there have sometimes been strategies, if it is purely data transfer fetches, of unmapping the related storage for processor execution during the i/o operation. However, if it is really stuff that can be dynamically updated by either i/o transfer or the processor during the execution of the i/o operation ... then it requires much tighter synchronization.
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 21st Century ISA goals?
Newsgroups: comp.arch
Date: Wed, 11 Apr 2007 14:28:00 -0600

"MitchAlsup" <MitchAlsup@aol.com> writes:
note that with the 370-xa changeover ... they started using (independent) 801 processor chips to handle the extended channel control functions
current i/o overview
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/13.0?SHELF=DZ9ZBK03&DT=20040504121320
i/o instructions
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/14.0?SHELF=DZ9ZBK03&DT=20040504121320
basic i/o functions
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/15.0?SHELF=DZ9ZBK03&DT=20040504121320
i/o interruptions
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/16.0?SHELF=DZ9ZBK03&DT=20040504121320
... including all the additional statistical information ... since the actual sequence of individual events was being masked by the more sophisticated queued interface.
i/o support functions
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/17.0?SHELF=DZ9ZBK03&DT=20040504121320
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 21st Century ISA goals?
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 11 Apr 2007 15:10:55 -0600

Anne & Lynn Wheeler <lynn@garlic.com> writes:
other posts in thread:
https://www.garlic.com/~lynn/2007h.html#2 21st Century ISA goals
https://www.garlic.com/~lynn/2007h.html#4 21st Century ISA goals?
https://www.garlic.com/~lynn/2007h.html#5 21st Century ISA goals?
so a little (long-winded) folklore about redrive latency and the (new) 3880 disk controller (late 70s). as before, past posts about getting to play in the disk engineering and product test labs
https://www.garlic.com/~lynn/subtopic.html#disk
the labs were doing all their testing with "stand-alone" machines ... i.e. dedicated, scheduled machine time with simple engineering test monitor software. In the past, they had attempted operation in an operating system environment but had experienced 15min MTBF (with the corporation's favorite son operating system). I undertook to rewrite an i/o subsystem to make it absolutely bullet-proof ... allowing them to do on-demand multiple concurrent testing of engineering hardware.
The disk labs tended to get the newest processors as they became available (the processor developers would have the first engineering machine ... and the disk labs would get the 2nd or possibly 3rd engineering machine). As a result, the disk labs had significant processing power ... but it had been devoted to stand-alone testing. When I got the i/o subsystem half-way bullet-proof ... they found themselves with an operating system environment on their machines ... that had possibly 1-2% processor utilization (even with a half-dozen engineering devices being tested concurrently). With all that extra processing power ... they initiated their own online interactive service ... scavenging some spare controllers and disk drives.
The new generation disk controller under development was the 3880 ... it would have more features and also handle the enhanced synchronization (for the 400ft double-length channel cables) and the coming ten-times-faster disk transfer (3mbytes coming with 3380 disk drives, compared to the prior 3330 disk drives). The 3880 control processor was a vertical microcode cpu that was much slower than the horizontal microcode processor used in the previous generation 3830 disk controller. To somewhat compensate, there was special hardware for data transfer. However, control operations and command processing were significantly slower on the 3880 (compared to the 3830).
So there was a requirement to show that the 3880 product was within five percent of the performance of the previous 3830 product. The command processing overhead was making the overall operation take a much longer time (measured from what the processor saw). So to compensate ... they started doing some hacks ... realizing the redrive latency ... they took to signaling the end-of-operation interrupt to the processor ... before the disk controller had actually finished doing all of the processing. At some point, somebody, somewhere ... ran a standard operating system product test suite against a 3880 controller and found test suite thruput to be within five percent of the 3830 controller.
Looks good?
So one Monday morning about 10am, i get an upset call from the product test lab asking what I had done to their system over the weekend ... because their interactive service response had gone all to <somewhere> that morning (and, of course, they claimed they hadn't done anything over the weekend).
so after some amount of investigation, i find that they had replaced a 3830 controller on a string of 3330s with a brand-new 3880 controller over the weekend. Turns out that my super-enhanced i/o subsystem also had an extremely short i/o redrive pathlength ... and I was getting around to I/O redrive (after i/o interrupt processing) before the 3880 controller had actually finished completely processing the previous operation. As a result, my I/O redrive was hitting the controller while it was still busy ... which then reflected a busy condition back to the processor. Now the processor had to go into a whole lot of extra processing and requeue the operation until the controller signaled it was actually finally not busy. The controller, having been hit with an additional operation while it was still busy, experienced a lot of extra processing ... which included having to signal a new interrupt when it finally was really "free" (having been forced to signal that it was busy ... even tho it had previously signaled that it had "completed" the previous operation, it then had to signal when it really was free).
All this was fairly traumatic, effectively cutting disk i/o operations/sec thruput by at least half under moderate load. So now both the controller people and I had to see about work-arounds for the 3880 i/o redrive latency "problem" (they have to significantly cut the actual busy time that continues after signaling finished with the previous operation ... and/or i have to significantly delay how fast i get around to redriving operations after the previous operation signaled complete).
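the interaction can be put in a rough cost model: the controller signals "complete" but stays busy for a hidden window; if the host's redrive arrives inside that window, it gets a busy condition and eats the requeue/extra-interrupt path. every number below is invented for illustration -- the point is only the shape of the interaction, where a *shorter* host redrive path produces *worse* thruput:

```python
# Rough model of the 3880 early-completion interaction. redrive_ms is the
# host's delay before driving the next op after the completion interrupt;
# hidden_busy_ms is how long the controller stays busy after signaling
# complete. A redrive landing inside that window pays the busy/requeue/
# extra-interrupt path. All costs are illustrative, not measured.

def ops_per_sec(service_ms, redrive_ms, hidden_busy_ms,
                requeue_penalty_ms=17.5):
    per_op = service_ms + redrive_ms
    if redrive_ms < hidden_busy_ms:        # redrive hits a still-busy controller
        per_op += (hidden_busy_ms - redrive_ms) + requeue_penalty_ms
    return 1000.0 / per_op

lazy_redrive = ops_per_sec(service_ms=10.0, redrive_ms=8.0, hidden_busy_ms=8.0)
prompt_redrive = ops_per_sec(service_ms=10.0, redrive_ms=0.5, hidden_busy_ms=8.0)
print(round(lazy_redrive, 1), round(prompt_redrive, 1))
```

with these placeholder numbers the bloated, slow-redrive host never notices the hidden busy window, while the fast-redrive host sees thruput roughly cut in half -- the same paradox as the story above.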
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Mainframe in 10 Years...
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 11 Apr 2007 17:34:09 -0600

bdissen@ibm-main.lst (Binyamin Dissen) writes:
other possible posts of interest in the thread:
https://www.garlic.com/~lynn/2007h.html#1 21st Century ISA goals?
https://www.garlic.com/~lynn/2007h.html#4 21st Century ISA goals?
https://www.garlic.com/~lynn/2007h.html#5 21st Century ISA goals?
some other recent posts mentioning ISAM and "self-modifying" channel
programs ... and one of my first assignments after graduation was
spending a week at customer site getting ISAM running in virtual machine
under cp67 (and trying to get dynamic modifications reflected in the
shadow channel program)
https://www.garlic.com/~lynn/2007e.html#14 Cycles per ASM instruction
https://www.garlic.com/~lynn/2007e.html#19 Cycles per ASM instruction
https://www.garlic.com/~lynn/2007e.html#27 IBM S/360 series operating systems history
https://www.garlic.com/~lynn/2007f.html#4 ISAM and/or self-modifying channel programs
https://www.garlic.com/~lynn/2007f.html#34 Historical curiosity question
previous post in this thread:
https://www.garlic.com/~lynn/2007h.html#2 The Mainframe in 10 Years
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: whiny question: Why won't z/OS support the HMC 3270 emulator
Newsgroups: bit.listserv.ibm-main
Date: Fri, 13 Apr 2007 05:12:24 -0600

alan_altmark@ibm-main.lst (Alan Altmark) writes:
it was "ported" to MVS and made available as a product by doing a vm kernel "diagnose" emulation for MVS (i.e. diagnose instruction use in vm is somewhat analogous to svc instruction use in mvs).
some really old folklore ... was that later there was an outside subcontract to implement tcp/ip support in vtam. the initial implementation came back with tcp support significantly faster than lu6.2 support. they were told that everybody knows that lu6.2 is much more efficient than tcp ... and therefore the only way that the tcp implementation could be significantly faster than lu6.2 was if it was implemented incorrectly ... and the contract wouldn't be fulfilled unless there was a "correct" tcp implementation.
past post references
https://www.garlic.com/~lynn/2003c.html#77 COMTEN- IBM networking boxes
https://www.garlic.com/~lynn/2006f.html#13 Barbaras (mini-)rant
https://www.garlic.com/~lynn/2006l.html#53 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 21st Century ISA goals?
Newsgroups: comp.arch
Date: Fri, 13 Apr 2007 06:18:58 -0600

"robertwessel2@yahoo.com" <robertwessel2@yahoo.com> writes:
In the late 70s I had started commenting that the relative system thruput of disks had been declining significantly (i.e. processors and memory were getting bigger/faster ... faster than disks were getting faster). By the early 80s, I claimed that over a period of approx. 15 yrs, relative system disk thruput had declined by a factor of ten times.
This upset some in the disk division, and the organization's performance and modeling group was assigned to refute the claims. After a couple weeks they came back and said that I had actually understated the situation (it was actually somewhat worse).
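the shape of the "declined by ten times" arithmetic is simply the ratio of two growth factors over the same period. the factors below are invented placeholders, not the study's numbers:

```python
# The "relative disk thruput declined 10x" arithmetic: systems got
# bigger/faster by one factor, disks by a smaller factor, and the ratio
# is the relative decline. Growth factors are placeholders, not data.
cpu_and_memory_growth = 50.0    # how much bigger/faster systems got
disk_thruput_growth = 5.0       # how much faster disks got, same period
relative_decline = cpu_and_memory_growth / disk_thruput_growth
print(relative_decline)         # a system fed 10x less (relatively) by disk
```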
So part of the issue was that the whole channel/controller/disk infrastructure required a dedicated "connection" during most of channel program execution. The channel could theoretically execute multiple channel programs at a time ... but only if there was a "solid" channel connection. In 360, provisions were made for stand-alone "seeks" (i.e. disk arm movement) to disconnect from the channel as soon as the cylinder address had been transferred. This allowed multiple disks to be connected to the same channel and have concurrent arm motion going on.
There was still the issue of disk rotation where no data was actually being transferred ... but the channel/controller were reserved/dedicated. For 370 (3830 controllers and 3330 disk drives), "rotational position sensing" (RPS) was introduced along with the "set sector" channel command. This allowed a disk channel program to disconnect from the channel while the disk rotated to the correct position for reading/writing a desired record (allowing other devices to utilize the channel). The problem was that when the rotation got into position, the disk had to "reconnect" ... if the channel was busy, the disk would rotate past the start of the record and would have to make a full, complete rotation to try again. This was called an "RPS-miss". My 15 yr period included the transition from 360 to 370 and the introduction of "RPS" and "RPS-miss".
A configuration rule-of-thumb grew up that channel loading had to be kept to 30 percent or less ... in order to minimize RPS-miss (i.e. rotating disks trying to dynamically reconnect to the channel).
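the 30 percent figure is easy to motivate with a toy model: suppose each reconnect attempt independently finds the channel busy with probability p, and every miss costs one full extra rotation. the expected number of misses per operation is then p/(1-p). this is a simplification (real channel busy isn't independent per attempt), so treat it as shape, not measurement:

```python
# Toy RPS-miss model: each reconnect attempt finds the channel busy with
# probability p (independently); each miss costs one full rotation.
# Expected misses per op = p/(1-p); rotation_ms assumes a 3600 rpm disk.

def expected_rps_delay_ms(p_channel_busy, rotation_ms=16.7):
    misses = p_channel_busy / (1.0 - p_channel_busy)
    return misses * rotation_ms

print(round(expected_rps_delay_ms(0.30), 1))   # ~7.2 ms extra per op
print(round(expected_rps_delay_ms(0.50), 1))   # a full extra rotation
```

at 30 percent channel busy the expected miss penalty is still under half a rotation; past that it climbs steeply (a full extra rotation per op at 50 percent), which is the intuition behind the rule-of-thumb.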
So we roll forward to 3880. Not only did I run into the problem of device "redrive" hitting the controller while it was busy
https://www.garlic.com/~lynn/2007h.html#6 21st Century ISA goals?
but I had also done a superfast "dynamic pathing" algorithm purely in software.
Disk controllers supported multiple channel connections ... which could be used to connect to multiple different processor complexes ("CEC") for loosely-coupled (cluster) operation ... and/or connect to multiple different channels for the same CEC (for availability/thruput).
So standard multiple-path support (a processor complex with multiple different channels to the same disk controller) tended to be implemented as a primary with one or more alternates. When I was doing the I/O supervisor rewrite for the engineering and product test labs ...
lots of past posts
https://www.garlic.com/~lynn/subtopic.html#disk
I also did a highly optimized implementation of dynamic pathing with load balancing (as opposed to primary/alternate). However, in the transition from the 3830 controller to the 3880 controller, this ran into another kind of "busy" problem.
Turns out one of the other optimizations done in the 3880 controller microcode (to compensate for the slowness of the processor) was that a lot of status was cached regarding the channel interface in use. The 3880 thruput and busy were significantly better if operations came in thru a single (channel) interface. Starting to hit the 3880 randomly from lots of different channel interfaces ... blew its "caching" and significantly drove up the controller busy every time it had to switch from one interface to another. This additional overhead was so significant ... that the primary/alternate strategy had significantly better thruput than dynamic load-balancing across all (available) interfaces. misc. past posts mentioning the experience redoing the multi-path support (in software)
https://www.garlic.com/~lynn/2004d.html#65 System/360 40 years old today
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
https://www.garlic.com/~lynn/2006v.html#16 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2007.html#44 vm/sp1
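the primary/alternate-vs-load-balancing result can be shown with a toy cost model: charge the controller an extra "busy" penalty whenever a request arrives on a different channel interface than the previous one (the 3880's cached interface state being blown). the per-op and switch costs are invented for illustration:

```python
# Toy comparison of path-selection strategies against a controller that
# pays an extra busy penalty on every channel-interface switch (a stand-in
# for the 3880's cached interface status). Costs are invented.

def total_busy_ms(path_sequence, op_ms=5.0, switch_ms=4.0):
    busy, last = 0.0, None
    for path in path_sequence:
        busy += op_ms + (switch_ms if path != last else 0.0)
        last = path
    return busy

ops = 1000
primary_alternate = total_busy_ms(["A"] * ops)        # stick to one interface
round_robin = total_busy_ms(["A", "B", "C", "D"] * (ops // 4))
print(primary_alternate, round_robin)
```

under this model the "smarter" round-robin spreading pays the switch penalty on every operation and racks up nearly double the controller busy -- the same effect that made primary/alternate beat dynamic load-balancing on the 3880.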
So one of the other things that the 370-xa i/o interface did was move the "dynamic pathing" under the covers ... into what was sometimes called "bump" processing/storage (i.e. a new "hardware" function that sat between the kernel drivers and the previous 360/370 channel interface).
Separate from that is the whole continuing saga of the excessive 3880 controller busy overhead ... which spilled over into increased channel busy (since a lot of the increased 3880 processing occurred during dedicated controller/channel handshaking).
The 3090 was built using a small number of TCMs ... and each TCM represented a significant part of the 3090 manufacturing cost. There was a lot of work on balanced configurations to maximize 3090 thruput ... this included having a sufficient number of disks and channels (at avg. of 30 percent busy or less ... harking back to the whole RPS-miss description). The early 3090 configuration specification was done effectively using 3830 disk controller characteristics. It eventually dawned that with the significant increase in channel busy when talking to a 3880 (rather than a 3830) ... the 3090 would require a lot more channels (in order to try and meet the 30 percent avg busy threshold requirement and minimize contention and problems like RPS-miss). It turns out that in order to add the additional channels, an additional TCM would have to be used in every 3090. There were some snide remarks that the "manufacturing cost" of an additional TCM in every 3090 should be billed against the 3880 disk controller organization and not the 3090 processor organization. misc. past posts mentioning 3880 busy resulting in having to increase the number of 3090 TCMs
https://www.garlic.com/~lynn/2000b.html#38 How to learn assembler language for OS/390 ?
https://www.garlic.com/~lynn/2002b.html#3 Microcode? (& index searching)
https://www.garlic.com/~lynn/2004n.html#15 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
https://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF
Now processor-side "dynamic pathing" only addressed half of the problem (at least for disks) ... which was finding any available channel path to the controller to initiate the operation (although with the 3880 disk controller, if the dynamic pathing got too fancy, as i found out with software, it could actually degrade thruput compared to the simpler primary/alternate strategy). However, once started, channel programs were bound to the initiating channel interface.
There was still the possibility of doing dynamic pathing (in the reverse direction) to try and help address the "RPS-miss" situation ... i.e. dynamic pathing from the controller to a channel on "reconnect" when the disk had rotated into position ... which would require a lot more processing smarts in the disk controller ... and also a way of indicating to the disk controller which of the channel paths were grouped to the same CEC. This was something for later, more efficient disk controller implementations (and more smarts on the processor side to realize that a channel program was reconnecting on a different channel). The channel program and channel commands can stay the same ... the "definition" of channel reconnect (for the controller) changes.
other posts in this thread:
https://www.garlic.com/~lynn/2007h.html#1 21st Century ISA goals?
https://www.garlic.com/~lynn/2007h.html#3 21st Century ISA goals?
https://www.garlic.com/~lynn/2007h.html#4 21st Century ISA goals?
https://www.garlic.com/~lynn/2007h.html#5 21st Century ISA goals?
misc. past posts mentioning RPS-miss
https://www.garlic.com/~lynn/96.html#5 360 "channels" and "multiplexers"?
https://www.garlic.com/~lynn/2001c.html#17 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#46 MVS History (all parts)
https://www.garlic.com/~lynn/2002b.html#1 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002i.html#18 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2002o.html#46 Question about hard disk scheduling algorithms
https://www.garlic.com/~lynn/2004e.html#16 Paging query - progress
https://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
https://www.garlic.com/~lynn/2006f.html#0 using 3390 mod-9s
old posts making claims about relative system disk thruput
drastically declining over the years
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
https://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?
https://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
https://www.garlic.com/~lynn/98.html#46 The god old days(???)
https://www.garlic.com/~lynn/99.html#4 IBM S/360
https://www.garlic.com/~lynn/2001b.html#38 Why SMP at all anymore?
https://www.garlic.com/~lynn/2001d.html#66 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#68 Q: Merced a flop or not?
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
https://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#9 What are some impressive page rates?
https://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2004n.html#15 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
https://www.garlic.com/~lynn/2005f.html#55 What is the "name" of a system?
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005k.html#53 Performance and Capacity Planning
https://www.garlic.com/~lynn/2006m.html#32 Old Hashing Routine
https://www.garlic.com/~lynn/2006o.html#27 oops
https://www.garlic.com/~lynn/2006x.html#13 The Future of CPUs: What's After Multi-Core?
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Fri, 13 Apr 2007 07:08:42 -0600
jmfbahciv writes:
I had included a simple statement that internal network was larger than the
arpanet/internet from just about the beginning until possibly mid-85.
https://www.garlic.com/~lynn/2007g.html#84 The Perfect Computer - 36 bits?
there was a post that seemed to imply that there was some question regarding the assertion about the relative sizes of the internal network and the arpanet ... "from just about the beginning" ... and there were some specific names that might possibly contest the assertion.
i then posted a reply with some references that gave an indication of
the size of the early arpanet (as well as some of the dynamics driving
the internal network implementation) ... of course, lets not reference
RFCs with real specifics
https://www.garlic.com/~lynn/2007h.html#0 The Perfect Computer - 36 bits
and then there is a post seeming to imply that there was a discussion about something other than size.
similar threads have happened before ... from the references to older posts related to the internal network subject (included in previous post) ... things like a reference to some comment about a specific period in time ... and then a response comes back that the discussion was really about some totally different subject, date, and/or place.
for other drift ... some of the ipv6 discussion was about the size
increase in the address field. also a lot of y2k was about legacy
applications and systems that were still around ... and had saved a few
bits in the date by only using a two-digit year field. some old email
about date processing
https://www.garlic.com/~lynn/99.html#233 Computer of the century
https://www.garlic.com/~lynn/99.html#24 BA Solves Y2K (Was: Re: Chinese Solve Y2K)
https://www.garlic.com/~lynn/2000.html#0 2000 = millennium?
https://www.garlic.com/~lynn/2000.html#94 Those who do not learn from history...
https://www.garlic.com/~lynn/2006r.html#16 Was FORTRAN buggy?
In the reference to some of the HASP networking implementation (i believe originating at TUCC) ... they had leveraged a one-byte field that had been used to define "pseudo" unit record devices. This was also used to support a lot of telecommunication unit record devices (card readers, printers, punches at remote locations). This telecommunication support was deployed and used in a large number of customer shops (i.e. lots of customer sites with a single processor supporting one or more remote sites over telecommunication lines).
The incremental enhancement was then to take that support and extend it to talk to other HASP systems. As a result, the single one-byte field was then being used to "address" pseudo-devices, remote telecommunication devices, as well as other hosts in the same network. This wasn't a particular problem for most customers, since they tended to only have a limited number of hosts ... and the issue of cross-domain (cross-corporate) interconnect was still quite significant.
However, there was (at least) one company with hundreds and then thousands of mainframes installed for internal use ... where inter-corporate jurisdictional issues wouldn't inhibit interconnecting processors.
As previously mentioned, the HASP implementation had shortcomings where different versions of HASP (& then JES) couldn't interoperate ... and could require a VNET intermediate node to do format conversion (as countermeasure to prevent format incompatibilities resulting in whole system crashes ... shudder to think about what a hostile operational environment could do with things like denial of service attacks).
However, the other HASP implementation issue was that since all definitions had to be identified by that single byte ... 255 max. possible, including all pseudo devices ... a large HASP configuration could have 60-80 pseudo-device definitions and possibly several remote telecommunications devices ... leaving the number of possible network node definitions as few as 150. This wasn't a problem for most closed corporate environments of the period ... but there was at least one where it was a significant problem. Also, it wasn't unusual for a corporation to keep all its HASP systems at the same version ... doing synchronized upgrades across a limited number of machines. However, synchronized upgrades don't scale well as the number of nodes increases significantly.
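The addressing squeeze can be sketched with a little arithmetic (the slot counts are the approximate figures from the text above, not actual HASP internals):

```python
# Sketch of the addressing squeeze (slot counts are the approximate
# figures from the text, not actual HASP internals): pseudo devices,
# remote telecommunication devices, and network-node definitions all
# competed for the same one-byte index.

ONE_BYTE_SLOTS = 255     # maximum definitions addressable in one byte

def remaining_node_slots(pseudo_devices: int, remote_devices: int) -> int:
    return ONE_BYTE_SLOTS - pseudo_devices - remote_devices

# a large HASP configuration: 60-80 pseudo-device definitions plus a
# few remote devices leaves only 150-odd slots for network nodes --
# hopeless for a network that later passed 1000 and then 2000 nodes
print(remaining_node_slots(80, 5))
```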
The saving grace was the implementation originated at the science center
for cp67.
https://www.garlic.com/~lynn/subtopic.html#545tech
Not only could it be used to handle the HASP/JES version
interoperability problem ... but it didn't have the addressing
limitation and could address the complete network. HASP/JES nodes then
tended to migrate to boundaries ... with configuration definition that
could only address some specific 100-200 node subset of the complete
network.
https://www.garlic.com/~lynn/subnetwork.html#internalnet
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Fri, 13 Apr 2007 08:07:00 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
this also appeared to be a similar issue with the arpanet BBN box implementation ... in some of the past posts I've referenced RFCs with box downtime schedules across the whole arpanet where maintenance and software upgrades needed to be done and some of the software changes appeared to require coordinated maintenance across the whole infrastructure.
This is not only a network interoperability issue between different boxes ... for the arpanet there were none of those interoperability issues, since there was only one kind of box ... but also an interoperability issue between different software versions. If you keep all the networking software the same (both the boxes and the versions) ... then interoperability (homogeneous/heterogeneous) issues can be eliminated ... although you can still have significant scaling issues if you have to keep the software version of all the boxes coordinated.
Supporting interoperability and eliminating the coordinated, homogeneous infrastructure operations ... helps with scaling ... since you no longer have to worry about keeping all boxes in coordinated sync. at all times.
From an operational standpoint ... different implementations from different organizations ... all being able to interoperate was something of a mid-80s happening for the arpanet/internet. The internal network faced it very early since
1) the cp67 and hasp implementations were totally different and came from totally distinct backgrounds and organizations (in fact, a lot of the early hasp networking base implementation even originated outside the company).
2) both cp67 and hasp implementations were part of the mainframe software (not a separate box). the individual datacenters around the world controlled the maintenance, support, and release/version transition schedule of the mainframe software in their datacenters ... and might have very little coordination with the rest of the world. As a result there was a wide variation in the release/version of the different software being run around the world (there wasn't the luxury of a separate box that could have centralized, coordinated support). the base interoperable orientation of the networking software started at the science center eliminated the need for coordinated, centralized support for world-wide operation (not only for the cp67/vm370 systems, but also for the hasp/jes systems). This was critical for the network size scaling.
re:
https://www.garlic.com/~lynn/subnetwork.html#internal
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Fri, 13 Apr 2007 12:46:01 -0600
jmfbahciv writes:
and the time we were out pitching 3-tier architecture and middle
ware/layer
https://www.garlic.com/~lynn/subnetwork.html#3tier
my wife did a short stint as chief architect for Amadeus. she was backing x.25 for their world-wide operation ... but the SNA forces were instrumental in getting her replaced. it didn't do them much good, Amadeus went with x.25 anyway.
misc. past posts mentioning Amadeus
https://www.garlic.com/~lynn/2001g.html#49 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#50 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001h.html#76 Other oddball IBM System 360's ?
https://www.garlic.com/~lynn/2003d.html#67 unix
https://www.garlic.com/~lynn/2003n.html#47 What makes a mainframe a mainframe?
https://www.garlic.com/~lynn/2004b.html#6 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004b.html#7 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004m.html#27 Shipwrecks
https://www.garlic.com/~lynn/2004o.html#23 Demo: Things in Hierarchies (w/o RM/SQL)
https://www.garlic.com/~lynn/2004o.html#29 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005f.html#22 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
https://www.garlic.com/~lynn/2006o.html#4 How Many 360/195s and 370/195s were shipped?
https://www.garlic.com/~lynn/2006r.html#9 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006y.html#14 Why so little parallelism?
https://www.garlic.com/~lynn/2007d.html#19 Pennsylvania Railroad ticket fax service
https://www.garlic.com/~lynn/2007e.html#52 US Air computers delay psgrs
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Question on DASD Hardware Newsgroups: bit.listserv.ibm-main Date: Fri, 13 Apr 2007 13:34:06 -0600
starsoul writes:
in the million-plus hrs.
however, there have been some recent articles on how accurate published numbers might really be (and/or what the distribution actually works out to be)
Hard disk test 'surprises' Google
http://news.bbc.co.uk/2/hi/technology/6376021.stm
Google Releases Paper on Disk Reliability
http://hardware.slashdot.org/hardware/07/02/18/0420247.shtml
Failure Trends in a Large Disk Drive Population
http://labs.google.com/papers/disk_failures.pdf
there have been articles in the past about how disk MTBF can be highly skewed (some failures very early ... and then very late ... as opposed to any sort of even or random distribution).
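A rough sketch of such a skewed ("bathtub") failure curve (the shape/scale values are my own illustrative choices, not figures from the Google paper): a decreasing-hazard Weibull models infant mortality and an increasing-hazard Weibull models wear-out, so the combined hazard is high early, dips in mid-life, and rises again late.

```python
# Illustrative sketch (shape/scale values are my own, not from the
# Google paper) of a skewed "bathtub" failure curve: a mixture of a
# decreasing-hazard Weibull (early/infant failures) and an
# increasing-hazard Weibull (late/wear-out failures).

def weibull_hazard(t: float, shape: float, scale: float) -> float:
    # hazard h(t) = (k/lam) * (t/lam)**(k-1)
    return (shape / scale) * (t / scale) ** (shape - 1.0)

def bathtub_hazard(t_years: float) -> float:
    early = weibull_hazard(t_years, shape=0.5, scale=10.0)   # early failures
    wearout = weibull_hazard(t_years, shape=4.0, scale=6.0)  # late failures
    return early + wearout

if __name__ == "__main__":
    for t in (0.25, 1.0, 3.0, 5.0):
        print(f"age {t:>4} yr: hazard {bathtub_hazard(t):.3f}/yr")
```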
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: conformance Newsgroups: alt.religion.kibology,alt.folklore.computers Date: Fri, 13 Apr 2007 14:41:09 -0600
Glenn Knickerbocker <NotR@bestweb.net> writes:
this has somewhat been discussed in some of the buffer overflow threads
... about using "in-band" NULL characters to indicate end-of-line (and
therefore implicitly indicate line length) ... as opposed to recfm=V,
which used an explicit (out-of-band infrastructure metadata) field for
line length.
https://www.garlic.com/~lynn/subintegrity.html#overflow
the various terminal/wire characters CR/LF (carriage-return and/or line-feed) are terminal "control" constructs.
CMS deals with a "virtual" 1052-7 (old style 360 machine console) for line-mode terminals (with some special stuff for "full-screen" 3270). CR/LF characters then get mapped into the 1052-7 equivalent ... and typically CMS would parse the "incoming" emulated terminal line/wire and strip out terminal control characters.
The recfm=F, lrecl=80 file format is obviously an inheritance from the physical card format. terminal/wire lines would typically/frequently get mapped into recfm=V file format.
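A minimal sketch of the two conventions (loosely modeled; the real recfm=V record descriptor word is more elaborate than this two-byte prefix): with an in-band terminator the data can never contain the terminator byte, while an out-of-band length field carries any byte value.

```python
# Sketch contrasting in-band termination (a NUL marks end-of-line, so
# the data itself can never contain a NUL) with recfm=V-style records,
# where an explicit out-of-band length field precedes each record.
# (Loosely modeled: the real recfm=V record descriptor word is larger
# than this two-byte prefix.)
import struct

def encode_inband(line: bytes) -> bytes:
    if b"\x00" in line:
        raise ValueError("in-band scheme cannot carry NUL inside the data")
    return line + b"\x00"

def encode_recfm_v(record: bytes) -> bytes:
    return struct.pack(">H", len(record)) + record   # 2-byte length prefix

def decode_recfm_v(buf: bytes) -> bytes:
    (length,) = struct.unpack(">H", buf[:2])
    return buf[2:2 + length]

# any byte value is fine when the length travels out-of-band
payload = b"any bytes \x00 including NUL"
assert decode_recfm_v(encode_recfm_v(payload)) == payload
```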
discussion of implementation of a POSIX compliant file system
"known" as the BYTE FILE SYSTEM (BFS) for CMS
http://www.redbooks.ibm.com/abstracts/SG244747.html?Open
reference to CMS now having support for (traditional
mainframe) RECFM=F, RECFM=V, RECFM=U, and also RECFM=D
("ascii variable length records")
http://www.vm.ibm.com/pubs/cms440/TVISECT.HTML
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: asymmetric cryptography + digital signature Newsgroups: sci.crypt Date: Fri, 13 Apr 2007 15:12:54 -0600
"Giorgio" <nacci.giorgio@gmail.com> writes:
in some cases, the cleartext is digitally signed (first) ... in an attempt to imply that the digital signature is also associated with the meaning of the cleartext (as opposed to simply providing integrity and authentication) ... and/or the cleartext already carries a digital signature as means of integrity/authentication ... independent of whether it is going to be transmitted.
and in reality ... (for efficiency) many infrastructures actually generate a random symmetric key ... encrypt the message with the symmetric key and then encrypt the symmetric key with the recipient's public key. then, in the case of email to multiple recipients ... all you have to do is encrypt the symmetric key with the public key of each of the recipients (as opposed to having a separately encrypted copy of the full message for each recipient).
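A toy sketch of that hybrid pattern (textbook RSA with tiny primes and a SHA-256-based stream cipher standing in for real algorithms; purely illustrative, not secure): one encrypted copy of the message, plus one small wrapped session key per recipient.

```python
# Toy sketch of the hybrid scheme described above: encrypt the message
# once under a random symmetric key, then wrap only that key under each
# recipient's public key.  Textbook RSA with tiny primes and a
# SHA-256-based stream cipher stand in for real algorithms -- purely
# illustrative, NOT secure.
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Symmetric toy cipher: XOR data with a SHA-256 counter keystream."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ s for b, s in zip(data, stream))

def toy_rsa_keypair(p: int, q: int, e: int = 17):
    """Textbook RSA from two (tiny) primes: returns (public, private)."""
    n, phi = p * q, (p - 1) * (q - 1)
    return (n, e), (n, pow(e, -1, phi))

recipients = {"alice": toy_rsa_keypair(61, 53),
              "bob":   toy_rsa_keypair(47, 59)}

# one random session key, one encrypted copy of the message ...
session = secrets.randbelow(2000) + 2          # small enough for both moduli
ciphertext = keystream_xor(session.to_bytes(2, "big"),
                           b"meet at the datacenter")

# ... but only a separately wrapped session key per recipient
wrapped = {name: pow(session, pub[1], pub[0])
           for name, (pub, _priv) in recipients.items()}

# each recipient unwraps the small value and decrypts the shared body
for name, (pub, priv) in recipients.items():
    k = pow(wrapped[name], priv[1], priv[0])
    assert keystream_xor(k.to_bytes(2, "big"),
                         ciphertext) == b"meet at the datacenter"
```

The payoff is exactly the efficiency argument in the post: adding a recipient costs one small public-key operation, not another full encryption of the message body.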
on the recipient's side ... if the digital signature is on the cleartext ... then it is possible for the recipient to keep the unencrypted/cleartext message along with the digital signature for longterm integrity/authentication.
If the digital signature is on the encrypted message ... if future/longterm authentication/integrity (of the content) is needed ... then the full encrypted message has to be also retained. Then to have ongoing high assurance as to the authentication/integrity, could require the digital signature (of the encrypted message) verified on each use ... followed by message decryption (compared to just having to reverify the digital signature of the cleartext message).
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: conformance Newsgroups: alt.religion.kibology,alt.folklore.computers Date: Fri, 13 Apr 2007 17:03:25 -0600
ArarghMail704NOSPAM writes:
yep, oh well, brain check ... even though i had done the tty/ascii
terminal support for cp67 in the 60s when i was an undergraduate ... the
subsequent problem mentioned in these posts involved "one byte" length
arithmetic.
https://www.garlic.com/~lynn/2007g.html#37 The Perfect Computer - 36 bits?
also mentioned in the stories here
https://www.multicians.org/thvv/360-67.html
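The general failure mode is easy to demonstrate (illustrative only, not the actual cp67 code): a length kept in a single byte silently wraps modulo 256, so a computed buffer size can come out absurdly small.

```python
# Illustration of the general failure mode (not the actual cp67 code):
# a length kept in a single byte silently wraps modulo 256, so a
# computed buffer size can come out absurdly small.

def one_byte_add(a: int, b: int) -> int:
    return (a + b) & 0xFF     # emulate 8-bit unsigned arithmetic

# e.g. a 200-byte maximum line plus a 60-byte prefix (made-up sizes):
assert 200 + 60 == 260                 # the intended size
assert one_byte_add(200, 60) == 4      # what one-byte arithmetic yields
```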
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: MIPS and RISC Newsgroups: comp.arch Date: Fri, 13 Apr 2007 21:03:24 -0600
"MitchAlsup" <MitchAlsup@aol.com> writes:
even some old email
https://www.garlic.com/~lynn/lhwemail.html#801
maybe the 2nd generation was the various iliad chips
note here on john cocke:
http://domino.watson.ibm.com/comm/pr.nsf/pages/news.20020717_cocke.html
801 wiki page:
https://en.wikipedia.org/wiki/IBM_801
i've periodically claimed that (at least some) motivation behind 801 was
to go to the opposite extreme from the extreme complexity of the
(failed/canceled) Future System project
https://www.garlic.com/~lynn/submain.html#futuresys
somewhat after the 370 fort knox activity in endicott was killed, some number of engineers that had worked on 801 efforts showed up at other companies ... amd (29k), hp (snake). there is folklore that one of the prime people showing up on snake had given two weeks notice ... and then spent the last two weeks on blue iliad.
somewhat separate from the 801 iliad stuff was 801 romp ... which started out as a joint effort between research and office products for an 801-based (cp.r, pl.8, etc) displaywriter follow-on. when that was killed they somewhat looked around and decided to turn it into a unix workstation ... which was released as the pc/rt. Then work started on the rios chipset (i've got a rios chipset paperweight that says 150 million ops, 60 million flops, 7 million transistors) ... which came out as "power" and rs/6000.
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: sizeof() was: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Sat, 14 Apr 2007 11:41:00 -0600
jmfbahciv writes:
post also mentions that by the end of '76, 16 percent of the burlington mall dev. group were working for DEC ... aka the result of POK getting approval to kill off the vm370/cms product, shutdown of the burlington location, and needing to move all the people to POK to support MVS/XA development.
using a search engine for decnet and wecker turns up lots of references
... including some mentioning Wecker as originator of DECnet (need
login for following)
http://ieeexplore.ieee.org/iel1/35/4759/x0321428.pdf
others just say he was one of the architects of DECnet.
misc. recent posts
https://www.garlic.com/~lynn/2007h.html#0 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007h.html#10 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007h.html#11 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007h.html#12 The Perfect Computer - 36 bits?
for other drift ... by the early 80s, the descendent of the network
support originated at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech
was starting to ship just the jes2 "gateway" drivers ... and stopped
shipping any of the native drivers ... among other things, the native
drivers performed/operated significantly better than the jes2 drivers
... corporate decision to minimize comparisons(?). It was in this era
that saw the start of bitnet using the product ... lots of past posts
mentioning bitnet (and/or earn, its european counterpart)
https://www.garlic.com/~lynn/subnetwork.html#bitnet
some old email from the person responsible for EARN in Europe
https://www.garlic.com/~lynn/2001h.html#email840320
https://www.garlic.com/~lynn/2006w.html#email850607
Also, slightly later in this era, the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
passed 1000 nodes (and the internal network continued to utilize
"native" drivers)
https://www.garlic.com/~lynn/internet.htm#22
https://www.garlic.com/~lynn/99.html#112 OS/360 names and error codes (was: Humorous and/or Interesting Opcodes)
https://www.garlic.com/~lynn/2006k.html#3 Arpa address
https://www.garlic.com/~lynn/2006k.html#8 Arpa address
before the jes2 revamp to support up to 1000 nodes (from its implementation
based on the HASP one-byte index table that had to be shared with several
other functions) ... and later the internal network passed 2000 nodes before
the jes2 upgrade to support 1999 nodes. lots of past hasp/jes2 posts
https://www.garlic.com/~lynn/submain.html#hasp
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Working while young Newsgroups: alt.folklore.computers Date: Sat, 14 Apr 2007 12:00:13 -0600
Car keys could go the way of tail fins
i learned to drive on an old flatbed truck the summer i turned nine. it had a pedal on the floor used to engage the starter motor (and all shifting was double clutch)
https://www.garlic.com/~lynn/38yellow.jpg
past posts:
https://www.garlic.com/~lynn/2002i.html#59 wrt code first, document later
https://www.garlic.com/~lynn/2004c.html#41 If there had been no MS-DOS
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: sizeof() was: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Sat, 14 Apr 2007 14:20:02 -0600
Morten Reistad <first@last.name> writes:
later when we were consulting for this small client/server startup that
wanted to do payments on their server ... they had this technology
called SSL that required digital certificates.
https://www.garlic.com/~lynn/subpubkey.html#sslcerts
we also coined the term "comfort certificates" ... described in some of the referenced posts (a lot of them existing to make you feel better).
We had to do a lot of auditing of various businesses associated with the
digital certificate stuff ... somewhat as a result, we coined the term
"certificate manufacturing" to differentiate from the stuff called PKI
that was normally found in the literature associated with x.509
https://www.garlic.com/~lynn/subpubkey.html#manufacture
the other issue was that by the mid-90s a lot of institutions were
starting to realize that the earlier "identity" x.509 work (typically
overloaded with a lot of personal information) represented significant
privacy and liability issues. as a result there was quite a bit of
retrenchment to what were called relying-party-only certificates
https://www.garlic.com/~lynn/subpubkey.html#rpo
however, it was normally trivial to demonstrate that rpo-certificates
were redundant and superfluous. that nearly everything that could be
done with rpo-certificates could be achieved with much simpler
infrastructure, still involving digital signatures ... but w/o the
enormous expense and trouble of digital certificates
https://www.garlic.com/~lynn/subpubkey.html#certless
old email suggesting/describing a simple certificate-less public key
operation
https://www.garlic.com/~lynn/2006w.html#email810515
some recent posts about vulnerabilities related to the existing
SSL operation
https://www.garlic.com/~lynn/2007f.html#31 Is that secure : <form action="https" from a local HTML page ?
https://www.garlic.com/~lynn/aadsm26.htm#47 SSL MITM-attacks make the news
https://www.garlic.com/~lynn/aadsm26.htm#56 Threatwatch: MITB spotted: MITM over SSL from within the browser
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: asymmetric cryptography + digital signature Newsgroups: sci.crypt Date: Sat, 14 Apr 2007 14:36:16 -0600
"Giorgio" <nacci.giorgio@gmail.com> writes:
built into every browser is a whole lot of clear-text messages containing public keys along with digital signatures ... which happen to be called "digital certificates". these are the things nearly all SSL operates from. if there are significant vulnerabilities as you describe ... then the whole SSL infrastructure might come crashing down.
previous post:
https://www.garlic.com/~lynn/2007h.html#15 asymmetric cryptography + digital signature
unrelated recent message ... that happens to mention a different kind
of vulnerability with the ssl infrastructure
https://www.garlic.com/~lynn/2007h.html#20 sizeof() was: The Perfect Computer - 36 bits?
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: sizeof() was: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Sun, 15 Apr 2007 09:46:08 -0600
Morten Reistad <first@last.name> writes:
the other issue is that digital certificates were an offline paradigm ... electronic emulation of credentials, certificates, licenses, etc ... essentially analogous to the letters of credit/introduction from the sailing ship (and earlier) days. they are useful to relying parties that have no other information about the entity.
the x.509 identity digital certificates were billed as a security feature ... which requires quite a bit of funding to cover the costs of audit, compliance, etc. however, in an online world, real-time, online information is significantly more valuable to relying parties ... than stale, static certificates.
besides the privacy and liability issues with identity digital certificates grossly overloaded with personal information ... relying parties having growing access to (the much more valuable) online, real-time information ... starts to move digital certificates into the no-value market segment (i.e. applications that can't justify the cost of online, real-time information). The problem then becomes the difficulty of justifying high prices in the no-value market segment ... and w/o a lot of revenue flow ... it is difficult to cover the costs of stringent security features, audits, compliance, etc.
so another aspect is that the whole digital certificate paradigm was targeted at a rapidly disappearing market segment ... with expanding, online, ubiquitous connectivity.
we were called in to consult with the small client/server startup
that had ssl ... that wanted to do payment transactions on servers
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
what is now frequently referred to as electronic commerce. part of
the issue is that the ssl domain name digital certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcert
are being paid for by merchants ... supposedly in support of this electronic commerce stuff. the value of the certificate is bounded by the fact that the merchants are already paying a significant amount on every transaction for the real-time infrastructure ... and, in fact, had been for a couple decades prior to the appearance of SSL.
In this period there were some number of x.509 advocates making statements about digital certificates being required to bring the payment infrastructure into the "modern" era. We would respond that the offline paradigm offered by digital certificates actually reverts the electronic payment infrastructure several decades. Not too long after that, work started on OCSP (online certificate status protocol) ... which is another rube goldberg type hack ... (along with relying-party-only certificates) that attempts to demonstrate an online infrastructure while attempting to preserve the fiction of (stale, static, redundant and superfluous) certificates serving a useful purpose in an online world.
of course ... my oft reference old post about security proportional
to risk
https://www.garlic.com/~lynn/2001h.html#61
and more recent posts concerning "armored" payments and transactions
... i.e. providing end-to-end strong authentication and integrity
https://www.garlic.com/~lynn/x959.html#x959
... but w/o enormous payload and processing bloat
https://www.garlic.com/~lynn/subpubkey.html#bloat
i.e. the payment protocols from the period that demonstrated appending digital certificates to payment transactions ... were sending (stale, static) information back to the relying party, when the relying party already had the real-time copy of the information (i.e. redundant and superfluous) ... however, these stale, static, redundant and superfluous digital certificates were increasing the payload transaction size and processing by two orders of magnitude.
it was somewhat out of this experience that we did the certificate-less
https://www.garlic.com/~lynn/subpubkey.html#certless
digitally signed financial transaction standard
https://www.garlic.com/~lynn/x959.html#x959
i.e. can have end-to-end integrity and authentication without having the enormous bloat of stale, static, redundant and superfluous appended digital certificates.
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: MIPS and RISC Newsgroups: comp.arch,alt.folklore.computers Date: Sun, 15 Apr 2007 10:00:05 -0600
re:
for totally other topic drift, Varian was running early chip design applications on cp67/cms. Later you find some of the influence (and even engineers) at other places around the valley like AMD, LSI Logic, etc. ... and chip design applications still running on vm370/cms (cp67/cms follow-on) well thru the 80s.
so separate from regularly visiting various of these places in the 70s and 80s ... related to chip technology ... also had lots of interactions about their vm370 operations.
and old email that happens to mention amd 29k
Date: Wed, 28 Sep 1988 13:42:59 PDT
From: wheeler
.... ....
Our icharts show that the risc industry is doubling processing power
every 18-24 months. Given that AMD introduced the 29000 in '87, then the
window for a 2x 29000 opens sometime in early '89. The 29000 has been
benchmarked at 40+k drystones (making a 2x 29000 85k-90k drystones). I
believe that the current numbers are approx:
original rt:  4k drystones
135:         12k drystones
ps2m70:      13k drystones
29000:      40k+ drystones
... snip ...

of course, CISC processors then started to move onto a similar technology curve.
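A quick back-of-envelope check of the quoted projection (assuming the email's ~40k dhrystone 1987 starting point for the Am29000 and the 18-24 month doubling it claims):

```python
# Back-of-envelope check of the quoted projection: start from the
# Am29000's ~40k dhrystones in 1987 (per the email) and apply the
# 18-24 month doubling the email claims.

def projected_dhrystones(base: float, years: float, doubling_months: float) -> float:
    return base * 2 ** (years * 12.0 / doubling_months)

# 18 months out at an 18-month doubling time is exactly 2x -- i.e. the
# "2x 29000" window opening around late '88/early '89
print(projected_dhrystones(40_000, 1.5, 18))
```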
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: sizeof() was: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Sun, 15 Apr 2007 10:34:40 -0600
jmfbahciv writes:
all large corporations seemed to have their equivalent ... witness
the future system stuff
https://www.garlic.com/~lynn/submain.html#futuresys
in the early 80s ... i had somewhat precipitated a new operating system rewrite project ... I had laid out a bunch of objectives ... programming technology, implementation language (some of the things being observed about the portability that unix was leveraging in the market), etc.
this quickly got adopted and ballooned into a fairly massive effort (with lots of people wanting to take advantage of the opportunity and get their favorite feature included ... a somewhat smaller scale repeat of what happened with the future system project). one of my original objectives about a small, highly focused group of individuals doing the implementation got lost. just before the whole effort imploded ... there were something like a couple hundred people working on writing specifications.
recent post
https://www.garlic.com/~lynn/2007g.html#69 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007g.html#70 The Perfect Computer - 36 bits?
of course something similar could be said about the workings of
large bureaucratic organizations in taking us out of the NSFNET backbone
picture and attempting to substitute SNA (despite all the best efforts
of NSF)
https://www.garlic.com/~lynn/lhwemail.html#nsfnet
or our experience with MEDUSA, cluster-in-a-box
https://www.garlic.com/~lynn/lhwemail.html#medusa
and there used to be a joke that product announcements required nearly 500 different executive approvals.
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: sizeof() was: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Sun, 15 Apr 2007 11:28:28 -0600Anne & Lynn Wheeler <lynn@garlic.com> writes:
for other drift ... the corporation periodically would release comments about encouraging wild ducks ... however somebody once wrote a byline for one of the series: "as long as they fly in formation". the other scenario was about encouraging people to self-select ... so they would have a list of people that needed to be dealt with.
post about being able to tell the people out in front by the arrows in their back
https://www.garlic.com/~lynn/2007f.html#41 time spent/day on a computer
and other related comments
https://www.garlic.com/~lynn/2007e.html#48 time spent/day on a computer
https://www.garlic.com/~lynn/2007f.html#29 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007f.html#30 The Perfect Computer - 36 bits?
and past post mentioning wild ducks:
https://www.garlic.com/~lynn/2007b.html#38 'Innovation' and other crimes
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: sizeof() was: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Sun, 15 Apr 2007 15:52:15 -0600Morten Reistad <first@last.name> writes:
You don't necessarily need "first order" digital certificates ... which are a paradigm targeted at relying parties that have no other access to the information ... direct, online, real-time access to the authoritative agency responsible for the information is also a possible solution. This has been the process in place for electronic payments for decades ... the quality of the information for the merchant is sufficiently more worthwhile since not only do they get a real-time response about the (supposed) validity of the consumer ... but also a real-time response with real-time aggregated information (like credit limit) ... which isn't possible in a stale, static, offline paradigm.
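The real-time authorization described above can be sketched in a few lines. This is a minimal illustration, not how any actual payment network is implemented; the account table, field names, and amounts are all made-up assumptions. The point is that the answer reflects live aggregated state (remaining credit), which no static certificate can carry.

```python
# Toy model: the authoritative party (the issuer) answers in real time,
# including aggregated state like the remaining credit limit.
# All names, fields, and the sample account are illustrative assumptions.

accounts = {"4111000011110000": {"valid": True, "credit_limit": 500_00, "spent": 450_00}}

def authorize(card, amount_cents):
    acct = accounts.get(card)
    if acct is None or not acct["valid"]:
        return {"approved": False, "reason": "no such account"}
    # real-time aggregation: something a stale offline credential cannot provide
    available = acct["credit_limit"] - acct["spent"]
    if amount_cents > available:
        return {"approved": False, "reason": "over limit"}
    acct["spent"] += amount_cents
    return {"approved": True, "available_after": available - amount_cents}
```

A $40.00 charge against the sample account would be approved (leaving $10.00 of the limit), while a second charge over the remainder would be declined in real time.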
there were 3-4 separate scenarios ... all involving the authoritative agencies responsible for the information that certification authorities are supposedly using to establish the validity of the information being certified ... which is then represented in the (stale, static, redundant and superfluous) digital certificates.
so with respect to certifications of merchant servers accepting payments, we initially suggested that the merchant financial institutions ... which already certify the merchants and take financial responsibility for them as part of sponsoring them to participate in electronic transactions ... should just issue digital certificates for the merchants that they already certify. but that turns out to be redundant and superfluous since the merchant financial institutions have already been doing such operations for decades as part of real-time electronic payments.
so with respect to domain name certificates ... as to the certification
that the applicant for the domain name certificate is really the owner
of the domain name ... SSL domain name certificates were originally
intended as 1) encryption/hiding information in transit, 2)
countermeasure to website impersonation, ip-address take-over,
man-in-the-middle attacks, etc
https://www.garlic.com/~lynn/subpubkey.html#sslcert
as well as various things related to integrity issues with the domain
name infrastructure. however, the process involves the domain name
infrastructure as the authoritative agency as the source of the
information that all the certification authorities rely on as to the
real domain name owner (certification authorities have to check with the
domain name infrastructure as to the true owner of the domain name
... when they are processing an application for domain name
certificate). Now there have been some proposals that improve the
integrity of the domain name infrastructure ... even backed by the
certification authority industry (since the validity of domain name
digital certificates tracks back to the integrity of the domain name
infrastructure as the source). However, this represents something
of a catch-22 for the certification authority industry since
a major original justification for domain name digital certificates
was because of domain name infrastructure integrity issues. Fixing
those integrity issues reduces the justification for domain name
digital certificates ... lots of past posts discussing this issue
https://www.garlic.com/~lynn/subpubkey.html#catch22
Part of the improvements involve having the domain name owner register a public key when they register their domain name (minimizing various forms of domain name hijacking and other vulnerabilities by requiring that communication from the domain name owner be digitally signed and then verified with the onfile public key ... note certificate-less operation).
Now there is additional opportunity for the certification authority industry. The current process has a domain name digital certificate applicant supply a lot of identification information with the application. Then the certification authority has an expensive, time-consuming, and error-prone process of matching the supplied identification information with the identification information on file with the domain name infrastructure.
The certification authority can start (also) requiring that domain name digital certificate applications also be digitally signed by the domain name owner. Then the certification authority can replace an expensive, time-consuming and error-prone identification process with a much less expensive, simpler, and much more reliable authentication process ... by doing a real-time retrieval of the onfile public key (from the domain name infrastructure) to validate the digital signature on the certificate application.
The additional catch-22 for the certification authority industry (in addition to eliminating a lot of the original reason for their existence) ... is that if they can do (certificate-less) real-time retrievals of public keys ... then the possibility exists that the rest of the world could also. Rather than having all the digital certificate originated protocol chatter as part of SSL session setup ... the client can get the (valid) public key piggybacked from the domain name infrastructure in the transaction that responds with the domain name to ip-address mapping. The client then just generates the random (symmetric) session encryption key, encrypts the message, encrypts the session key with the returned public key ... and sends the whole thing off to the server in a single message transmission. It is then theoretically possible to have an SSL exchange in a single transmission round trip.
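The shape of that single-round-trip exchange can be sketched with toy primitives. This is deliberately not real cryptography: the tiny textbook RSA parameters and the XOR "cipher" are stand-ins so the message flow fits in a few lines. Everything here is an illustrative assumption.

```python
import secrets

# textbook RSA with tiny primes -- illustration only, NOT real crypto
P, Q = 61, 53
N = P * Q            # 3233
E = 17               # server's public exponent (piggybacked on the DNS reply)
D = 2753             # server's private exponent

def stream_encrypt(key, data):
    # toy symmetric cipher: repeating-key XOR stands in for a real cipher
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# --- client: public key arrived with the domain-name-to-ip-address reply ---
session = secrets.randbelow(N - 2) + 1        # random symmetric session key
wrapped = pow(session, E, N)                  # session key under server pubkey
body = stream_encrypt(session.to_bytes(2, "big"), b"GET / HTTP/1.0")
message = (wrapped, body)                     # one transmission, no certificate

# --- server: unwrap the session key and decrypt; exchange complete ---
key = pow(message[0], D, N).to_bytes(2, "big")
plaintext = stream_encrypt(key, message[1])   # XOR is its own inverse
```

The entire certificate-exchange preamble of a normal SSL handshake disappears because the client already trusts the public key it got with the address lookup.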
In an online world ... it is theoretically possible to have direct real-time "first order" information from the authoritative agency and/or financially responsible institution, information that is significantly more valuable than what you could get from a stale, static, redundant and superfluous digital certificate.
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: sizeof() was: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Sun, 15 Apr 2007 16:42:38 -0600Morten Reistad <first@last.name> writes:
part of the issue ... which side-tracked the whole process with extremely complex and costly processing ... was much of the digital certificate stuff (which provided little fundamental added value).
fundamentally, digital signatures and asymmetric key technology can provide for end-to-end integrity and authentication.
the digital certificate stuff is an electronic analog of credentials, certificates, licenses, and/or the letters of credit/introduction from the sailing ship days (and earlier) ... aka a mechanism for trusted distribution of information to relying parties that otherwise had no access to the information (the two parties were anonymous strangers, having had no prior interaction with each other ... and with no recourse to direct interaction with an authoritative agency with regard to information about the other party).
I've mentioned before that in the mid-90s, the certification authority industry appeared to take to wallstreet the prospect of a $20b/annum business ... i.e. the financial institutions would underwrite the cost of a $100/person/annum digital certificate for their customers (i.e. approx. 200m people).
There was one scenario where a large financial institution was told that they could transmit their customer masterfile and the certification authority would reformat the bits and return them something called a digital certificate (for every record in the institution's customer masterfile) ... and the price would only be $100/account ... and oh, by the way, this would have to be repeated every year.
The financial institution could then distribute these digital certificates to their customers ... and then for all future electronic communication/transactions, the customer would digitally sign the electronic communication/transaction, and send off the communication/transaction, the digital signature, and the appended digital certificate to the financial institution. The financial institution would retrieve the associated account record ... and use the onfile public key to verify the digital signature. Since the financial institution had the original real-time version of the information, there was never a situation where it would be necessary to refer to the stale, static, redundant and superfluous digital certificate. So the financial institution that had between 10m-20m such accounts ... came to the realization that there was no justification for a $1b-$2b annual transfer of wealth to the certification authority.
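The onfile-public-key flow described above is simple enough to sketch. This is a toy using tiny textbook RSA numbers; the account id, transaction format, and function names are illustrative assumptions. Note that nothing resembling a certificate appears anywhere: the institution verifies against the key it already has in the account record.

```python
import hashlib

# tiny textbook RSA key pair -- illustration only, NOT real crypto
N, E, D = 3233, 17, 2753

def digest(msg):
    # hash reduced into the toy RSA modulus (collision-prone at this size)
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

def sign(msg):
    return pow(digest(msg), D, N)

def verified(msg, sig, pubkey):
    n, e = pubkey
    return pow(sig, e, n) == digest(msg)

# registration: the institution records the public key in the account record
accounts = {"12345": {"onfile_pubkey": (N, E)}}

# customer digitally signs a transaction -- no certificate appended
txn = b"account=12345;pay=25.00;to=merchant-99"
message = (txn, sign(txn))

# institution: retrieve the account record and verify with the onfile key
ok = verified(message[0], message[1], accounts["12345"]["onfile_pubkey"])
```

Since the verifier owns the original registration record, an appended certificate carrying a copy of the same public key would add nothing.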
We dropped into the institution and visited the people (that had already spent $50m on a pilot) just after the board became aware of the $1b-$2b/annum ongoing transfer of wealth requirement (and the responsible people had been advised that they should start looking for opportunities elsewhere).
These sorts of realizations sort of tanked the $20b/annum business case for the industry that had been floating around wallstreet.
misc. past posts mentioning the $20b/annum business case scenario
https://www.garlic.com/~lynn/aadsm7.htm#rhose4 Rubber hose attack
https://www.garlic.com/~lynn/aadsm14.htm#29 Maybe It's Snake Oil All the Way Down
https://www.garlic.com/~lynn/aadsm18.htm#52 A cool demo of how to spoof sites (also shows how TrustBar prevents this...)
https://www.garlic.com/~lynn/aadsm23.htm#29 JIBC April 2006 - "Security Revisionism"
https://www.garlic.com/~lynn/2005i.html#36 Improving Authentication on the Internet
https://www.garlic.com/~lynn/2005j.html#32 IBM Plugs Big Iron to the College Crowd
so the next approach was attempting to get governments to pass
legislation mandating digital certificates as part of all electronic
signature operations. we ran into this when we were asked in to help
wordsmith the cal. state electronic signature legislation (and later
the federal electronic signature legislation). One of the big issues
was that in attempting to equate digital signatures with human
signatures ... the lawyers pointed out they (certification authority
industry) had left out the part of human signatures related to
intent as part of generating the signature ... i.e. that the
person had read, understood, agrees, approves, and/or
authorizes what is being signed. lots of past posts on the
difference between digital/electronic signatures and the issue of
showing intent
https://www.garlic.com/~lynn/subpubkey.html#signature
part of this was periodically attributed to attempts to take advantage of possible semantic confusion, since both the terms "digital signature" and "human signature" contain the word "signature" ... even tho they are otherwise totally unrelated.
past posts mentioning x9.59 financial standard protocol providing
end-to-end integrity and authentication
https://www.garlic.com/~lynn/x959.html#x959
w/o the enormous added payload and processing bloat of requiring
appending digital certificates
https://www.garlic.com/~lynn/subpubkey.html#bloat
lots of related posts in this n.g. in the long running thread
https://www.garlic.com/~lynn/2006y.html#7 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2006y.html#8 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007.html#0 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007.html#5 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007.html#6 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007.html#27 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007.html#28 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007b.html#60 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007b.html#61 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007b.html#62 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007b.html#64 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#6 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#8 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#10 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#15 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#17 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#18 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#22 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#26 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#27 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#28 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#30 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#31 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#32 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#33 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#35 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#36 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#37 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#38 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#39 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#43 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#44 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#46 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#51 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#52 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#53 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007d.html#0 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007d.html#5 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007d.html#11 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007d.html#26 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007d.html#68 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007d.html#70 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007e.html#2 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007e.html#12 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007e.html#20 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007e.html#23 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007e.html#24 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007e.html#26 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007e.html#28 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007e.html#29 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007e.html#58 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007e.html#61 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007e.html#62 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007e.html#65 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007f.html#8 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007f.html#58 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007f.html#68 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007f.html#72 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007f.html#75 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007g.html#8 Securing financial transactions a high priority for 2007
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: sizeof() was: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Sun, 15 Apr 2007 17:22:35 -0600Anne & Lynn Wheeler <lynn@garlic.com> writes:
there was a similar but different foray in this period to try and get merchants to underwrite consumer (x.509 identity) digital certificates ... since the financial institutions had declined the privilege and it looked like getting the gov. to mandate them (such that consumers would have to pay for their own) might not succeed.
now the merchants are already paying quite a bit on a per-transaction basis ... and this would have further increased those payments. The offer basically implied that transactions that were consumer digitally signed and carrying the consumer's digital certificate ... might/could reverse the burden of proof. the current scenario in a merchant/consumer dispute ... puts the burden of proof on the merchant. If the burden of proof were to be reversed ... that meant that in a merchant/consumer dispute ... the burden of proof would be on the consumer (enormous savings for merchants in disputes).
(unfortunately?) A few raised the question that if this actually came to pass ... why would a consumer ever voluntarily want to digitally sign anything?
This is possibly similar, but different, to some recent comments about
changes in the UK:
Card victims told 'don't call police'
http://www.thisismoney.co.uk/credit-and-loans/idfraud/article.html?in_article_id=418947&in_page_id=159
Concern over new fraud reporting
http://news.bbc.co.uk/1/hi/programmes/moneybox/6513835.stm
New rules to report fraud announced
http://www.moneyexpert.com/News/Credit-Card/18106248/New-rules-to-report-fraud-announced.aspx
Apacs: Report credit card fraud direct to bank
http://www.fairinvestment.co.uk/credit_cards-news-Apacs:-Report-credit-card-fraud-direct-to-bank-18107160.html
Anger at card fraud reporting changes - Law & Policy
http://management.silicon.com/government/0,39024677,39166633,00.htm
Banks charging to the top of the hate parade
http://edinburghnews.scotsman.com/opinion.cfm?id=508912007
Warning Over Purge On Credit Card Fraud
http://www.eveningtimes.co.uk/news/display.var.1303206.0.warning_over_purge_on_credit_card_fraud.php
Anger at card fraud reporting changes
http://www.silicon.com/financialservices/0,3800010322,39166633,00.htm
Financial institutions to report on card fraud
http://www.gaapweb.com/news/135-Financial-institutions-to-report-on-card-fraud.html
UK Tells Consumers To Report Financial Fraud to Their Banks
http://www.paymentsnews.com/2007/04/uk_tells_consum.html
Financial institutions to be first point of contact for reporting
banking crime
http://www.cbronline.com/article_news.asp?guid=DE47801B-AE60-4073-8314-26AC46AC7C03
Card Fraud Changes 'Will Not Adversely Affect Police Response'
http://www.fool.co.uk/news/your-money/credit-cards/2007/04/11/card_fraud_changes_will_not_adversely_affect_polic.aspx
and related blog entry:
http://www.lightbluetouchpaper.org/2007/02/08/financial-ombudsman-on-chip-pin-infallibility/
and for other drift ... some past posts mentioning some possible
vulnerabilities in various chip deployments
https://www.garlic.com/~lynn/subintegrity.html#yescard
including a quote of somebody quipping about having spent billions of dollars to prove that chips are less secure than magstripe.
for completely other drift ... a few past "interchange" fee references
https://www.garlic.com/~lynn/aadsm23.htm#37 3 of the big 4 - all doing payment systems
https://www.garlic.com/~lynn/aadsm26.htm#1 Extended Validation - setting the minimum liability, the CA trap, the market in browser governance
https://www.garlic.com/~lynn/aadsm26.htm#25 EV - what was the reason, again?
https://www.garlic.com/~lynn/aadsm26.htm#34 Failure of PKI in messaging
https://www.garlic.com/~lynn/aadsm7.htm#rhose3 Rubber hose attack
https://www.garlic.com/~lynn/2005u.html#16 AMD to leave x86 behind?
https://www.garlic.com/~lynn/2006k.html#23 Value of an old IBM PS/2 CL57 SX Laptop
https://www.garlic.com/~lynn/2007.html#27 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#38 Securing financial transactions a high priority for 2007
and misc. past posts mentioning burden of proof issue (in disputes):
https://www.garlic.com/~lynn/aadsm6.htm#nonreput Sender and receiver non-repudiation
https://www.garlic.com/~lynn/aadsm6.htm#terror7 [FYI] Did Encryption Empower These Terrorists?
https://www.garlic.com/~lynn/aepay10.htm#72 Invisible Ink, E-signatures slow to broadly catch on
https://www.garlic.com/~lynn/aadsm17.htm#59 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#0 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#55 MD5 collision in X509 certificates
https://www.garlic.com/~lynn/aadsm19.htm#33 Digital signatures have a big problem with meaning
https://www.garlic.com/~lynn/aadsm20.htm#0 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm21.htm#35 [Clips] Banks Seek Better Online-Security Tools
https://www.garlic.com/~lynn/aadsm23.htm#14 Shifting the Burden - legal tactics from the contracts world
https://www.garlic.com/~lynn/aadsm23.htm#33 Chip-and-Pin terminals were replaced by "repairworkers"?
https://www.garlic.com/~lynn/2000.html#57 RealNames hacked. Firewall issues.
https://www.garlic.com/~lynn/2000g.html#34 does CA need the proof of acceptance of key binding ?
https://www.garlic.com/~lynn/2001g.html#59 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001g.html#62 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001l.html#52 Security standards for banks and other institution
https://www.garlic.com/~lynn/2002g.html#69 Digital signature
https://www.garlic.com/~lynn/2004i.html#17 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2005e.html#41 xml-security vs. native security
https://www.garlic.com/~lynn/2005m.html#6 Creating certs for others (without their private keys)
https://www.garlic.com/~lynn/2005m.html#11 Question about authentication protocols
https://www.garlic.com/~lynn/2005o.html#26 How good is TEA, REALLY?
https://www.garlic.com/~lynn/2005o.html#42 Catch22. If you cannot legally be forced to sign a document etc - Tax Declaration etc etc etc
https://www.garlic.com/~lynn/2006d.html#32 When *not* to sign an e-mail message?
https://www.garlic.com/~lynn/2006e.html#8 Beginner's Pubkey Crypto Question
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: sizeof() was: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Sun, 15 Apr 2007 20:50:05 -0600Andrew Swallow <am.swallow@btopenworld.com> writes:
in past posts there have been passing references to how failures can spawn
significant promotions and empire building ... there was a recent
corollary mentioned in a Risks Digest article, "nothing succeeds like failure":
http://catless.ncl.ac.uk/Risks/24.62.html
in the past, the reference was comparing the 12 people developing cp67
for the 360/67 and eventually something like 1200 people developing
tss/360 for the 360/67; some comments are that as the larger group
appeared to be unable to deal with one problem or another ... the
solution was to significantly increase the size of the organization
(with sufficiently large organization any problem can be solved). so
there is some significant incentive to not solve problems simply
... because there is always the chance that having difficulty in solving
a problem will result in significant empire building. old posts
mentioning the difference between the 12 working on cp67 and the
1200 working on tss/360.
https://www.garlic.com/~lynn/2002d.html#23 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002d.html#36 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002n.html#62 PLX
https://www.garlic.com/~lynn/2003g.html#24 UltraSPARC-IIIi
https://www.garlic.com/~lynn/2004c.html#61 IBM 360 memory
https://www.garlic.com/~lynn/2004f.html#55 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2005c.html#18 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2006q.html#32 Very slow booting and running and brain-dead OS's?
another (datacenter) corollary is that it can be counterproductive to always anticipate all problems and make sure they are averted ... because then it promotes the belief that there haven't been any hard problems and that the job of providing datacenter service wasn't really a difficult, hard problem.
as in other recent posts ... the Boyd scenario is that some of this
originates as the result of training given young officers in WW2
... i.e. the assumption that there are massive numbers of inexperienced
people and therefore it required a strongly enforced command & control
infrastructure to leverage the very few people that knew what they were
doing ... who directed the movements of massive numbers of
inexperienced people. recent post
https://www.garlic.com/~lynn/2007e.html#45 time spent/day on a computer
somewhat bleed over from some other posts in this thread:
https://www.garlic.com/~lynn/2007h.html#27 sizeof() was: The Perfect Computer - 36 bits?
there is some line somewhere that KISS can actually be much harder/more difficult than an extremely complex solution ... or that it is done when there is nothing left to remove (as opposed to being done when there is nothing left to add).
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: sizeof() was: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Sun, 15 Apr 2007 21:43:36 -0600Anne & Lynn Wheeler <lynn@garlic.com> writes:
and for a different kind of topic drift
https://www.garlic.com/~lynn/2007g.html#41 US Airways badmouths legacy system
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: sizeof() was: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Sun, 15 Apr 2007 23:55:02 -0600
now there were some who ... even after the detailed description of
information flow and how businesses actually operated with account
records and other stuff in the real business world ... were totally
unable to part with the digital certificate as a comfortable analogy to
the physical world with credentials, certificates, licenses, etc.
.... the whole certificate-less operation of digital signatures
and public keys just seemed alien
https://www.garlic.com/~lynn/subpubkey.html#certless
so we came up with two certificate-based scenarios ... where we were
able to avoid the enormous penalty of having the attached certificates
on every communication/transmission ... especially in the payment
transaction scenario ... where it represented a two-orders-of-magnitude
(factor of one hundred) increase in both payload bloat and
processing bloat
https://www.garlic.com/~lynn/subpubkey.html#bloat
so the scenario is that the consumer ... registers their public key with the registration authority ... in this case, their financial institution. the financial institution validates the public key and then generates the digital certificate. the digital certificate is then recorded in the institution's account records for business reasons.
1) caching
Now, normally at this point, a standard PKI process has the institution returning a copy of the digital certificate to the public key owner ... so that on all future communication that the public key owner has with the institution ... they can digitally sign it ... and then transmit the communication, the digital signature and the copy of the digital certificate back to the institution. However, normal caching algorithms allow that if it is known that the intended recipient already has a cached copy of something, it is not necessary to repeatedly retransmit it.
In fact, normal, existing PKI implementations already allow relying parties such a strategy for digital certificates belonging to (intermediary) certification authorities (in a trust hierarchy), aka cache such digital certificates and avoid having to retransmit copies on every operation. This process just extends the caching strategy to all digital certificates ... and since the recipient of the communication is known not only to already have a copy of each digital certificate ... but in fact actually has all the originals (stashed away in account records) ... as opposed to the copy provided to the public key owner ... it then is obvious that repeatedly transmitting the copy on every communication ... back to the entity that has all the originals ... is both redundant and superfluous.
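The caching argument above amounts to a one-line check at send time: attach the certificate only when the recipient is not known to already hold it. A minimal sketch, with the certificate size, party names, and wire format all made-up assumptions:

```python
# stand-in for a roughly 1KB digital certificate; all names are assumptions
CERT = b"C" * 1200

# relying parties known to already hold the certificate -- e.g. the issuing
# institution, which keeps the original in the account record
known_to_cache = {"issuing-bank"}

def build_message(recipient, payload, signature):
    parts = [payload, signature]
    if recipient not in known_to_cache:
        parts.append(CERT)          # attach only on first contact
    return b"|".join(parts)

first_contact = build_message("stranger", b"txn-1", b"sig-1")       # cert attached
to_issuer = build_message("issuing-bank", b"txn-2", b"sig-2")       # cert elided
```

For the payment case the savings dominate the message: the signed transaction is tens of bytes, while the elided certificate would be a kilobyte or more on every single transmission.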
2) compression
the enormous payload and processing bloat of digital certificates in the financial sphere has been recognized for some time. The X9F standards committee has even had an x9.68 work item for digital certificate compression related to financial transactions. The objective was to get digital certificate payload size down into the 300 byte range. One of the approaches was to recognize that a lot of the fields in digital certificates from financial institutions are the same (across all digital certificates). Part of the compression effort was looking at eliminating all fields that were identical across all digital certificates. We went even further; we claimed that it was possible to do information compression by eliminating all digital certificate fields that were known to already be in the possession of the financial institutions. We were then able to show that, in fact, all digital certificate fields were already in the possession of the financial institutions and therefore it was possible to reduce the digital certificate size to zero bytes. Then we would absolutely mandate that all digitally signed communication always required the appending of the (compressed) zero byte digital certificates.
....
various past posts going into this explanation on how to help reduce the
enormous payload and processing bloat contributed by normal digital
certificate processing (by either using caching techniques and/or
extremely aggressive compression techniques to create zero byte digital
certificates)
https://www.garlic.com/~lynn/aepay3.htm#aadsrel1 AADS related information
https://www.garlic.com/~lynn/aepay3.htm#aadsrel2 AADS related information ... summary
https://www.garlic.com/~lynn/aepay3.htm#x959discus X9.59 discussions at X9A & X9F
https://www.garlic.com/~lynn/aadsmore.htm#client4 Client-side revocation checking capability
https://www.garlic.com/~lynn/aadsm3.htm#cstech3 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aadsm3.htm#cstech6 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aadsm3.htm#kiss1 KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION :draft-ietf-pkix-scvp- 00.txt))
https://www.garlic.com/~lynn/aadsm3.htm#kiss6 KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION :draft-ietf-pkix-scvp- 00.txt))
https://www.garlic.com/~lynn/aadsm4.htm#6 Public Key Infrastructure: An Artifact...
https://www.garlic.com/~lynn/aadsm4.htm#9 Thin PKI won - You lost
https://www.garlic.com/~lynn/aadsm5.htm#x959 X9.59 Electronic Payment Standard
https://www.garlic.com/~lynn/aadsm5.htm#shock revised Shocking Truth about Digital Signatures
https://www.garlic.com/~lynn/aadsm5.htm#spki2 Simple PKI
https://www.garlic.com/~lynn/aadsm8.htm#softpki8 Software for PKI
https://www.garlic.com/~lynn/aadsm9.htm#softpki23 Software for PKI
https://www.garlic.com/~lynn/aepay10.htm#76 Invisible Ink, E-signatures slow to broadly catch on (addenda)
https://www.garlic.com/~lynn/aepay11.htm#68 Confusing Authentication and Identiification?
https://www.garlic.com/~lynn/aadsm12.htm#28 Employee Certificates - Security Issues
https://www.garlic.com/~lynn/aadsm12.htm#64 Invisible Ink, E-signatures slow to broadly catch on (addenda)
https://www.garlic.com/~lynn/aadsm13.htm#20 surrogate/agent addenda (long)
https://www.garlic.com/~lynn/aadsm14.htm#30 Maybe It's Snake Oil All the Way Down
https://www.garlic.com/~lynn/aadsm14.htm#41 certificates & the alternative view
https://www.garlic.com/~lynn/aadsm15.htm#33 VS: On-line signature standards
https://www.garlic.com/~lynn/aadsm20.htm#11 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm22.htm#4 GP4.3 - Growth and Fraud - Case #3 - Phishing
https://www.garlic.com/~lynn/aadsm23.htm#51 Status of opportunistic encryption
https://www.garlic.com/~lynn/aadsm24.htm#5 New ISO standard aims to ensure the security of financial transactions on the Internet
https://www.garlic.com/~lynn/2000b.html#93 Question regarding authentication implementation
https://www.garlic.com/~lynn/2000e.html#41 Why trust root CAs ?
https://www.garlic.com/~lynn/2000f.html#3 Why trust root CAs ?
https://www.garlic.com/~lynn/2001c.html#57 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#58 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001d.html#31 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001e.html#35 Can I create my own SSL key?
https://www.garlic.com/~lynn/2001f.html#57 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#79 FREE X.509 Certificates
https://www.garlic.com/~lynn/2001n.html#84 Buffer overflow
https://www.garlic.com/~lynn/2002j.html#9 "Clean" CISC (was Re: McKinley Cometh...)
https://www.garlic.com/~lynn/2003f.html#32 Alpha performance, why?
https://www.garlic.com/~lynn/2004d.html#7 Digital Signature Standards
https://www.garlic.com/~lynn/2005b.html#31 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005n.html#33 X509 digital certificate for offline solution
https://www.garlic.com/~lynn/2005o.html#31 Is symmetric key distribution equivalent to symmetric key generation?
https://www.garlic.com/~lynn/2005t.html#6 phishing web sites using self-signed certs
https://www.garlic.com/~lynn/2006b.html#37 X.509 and ssh
https://www.garlic.com/~lynn/2006c.html#35 X.509 and ssh
https://www.garlic.com/~lynn/2006f.html#29 X.509 and ssh
https://www.garlic.com/~lynn/2006h.html#28 confidence in CA
https://www.garlic.com/~lynn/2006i.html#13 Multi-layered PKI implementation
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: sizeof() was: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Mon, 16 Apr 2007 08:23:19 -0600
jmfbahciv writes:
in the server/gateway (which we've periodically claimed is the original SOA) portion ... the merchant server has a prior business relationship with the acquiring financial institution and must be registered ... and similarly the acquiring financial institution must be registered with the merchant server. as a result there is stable, registered information held by each party about the other party.
as oft repeated, the digital certificate design point is something of an anarchy, with no previous knowledge by either party about the other party (total strangers) ... trusted information distribution when there is no other mechanism (that either party has for obtaining information about the other party). the digital certificate design point is much lower quality than direct knowledge of the relationship between both parties and/or established, registered information ... and therefore the digital certificate design point has significantly greater uncertainties than direct, online, real-time information. However, again, this is the digital certificate design point ... it fills a gap in the offline world paradigm ... some information is better than the alternative ... none.
so this is a recent post discussing the transition from pre-internet
online banking to the current infrastructure. the pre-internet online
(dialup) banking had much less uncertainty because there was stable
registered information about the banking entity being dealt with. The
transition to internet based online infrastructure was an enormous cost
savings ... but the digital certificate based approach introduced
enormous uncertainty (compared to the previous mechanism).
https://www.garlic.com/~lynn/aadsm26.htm#52 The One True Identity -- cracks being examined, filled, and rotted from the inside
https://www.garlic.com/~lynn/aadsm26.htm#53 The One True Identity -- cracks being examined, filled, and rotted from the inside
recent posts in this thread
https://www.garlic.com/~lynn/2007h.html#20 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007h.html#22 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007h.html#26 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007h.html#27 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007h.html#28 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007h.html#31 sizeof() was: The Perfect Computer - 36 bits?
for other topic drift ... i've also in the past drawn an analogy between the digital certificate paradigm for trusted distribution of information in an offline world ... and distributed database caching (or multiprocessor cache) algorithms ... where the attributes of distributed information ... like stale, static, timely, consistent, etc ... have been studied much more thoroughly and are better understood (by comparison, there tends to be a whole lot of hand waving most of the time with regard to the information theory characteristics of digital certificates).
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: sizeof() was: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Mon, 16 Apr 2007 10:02:15 -0600
"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
for decades, similar comments have been leveled against the automobile industry ... i.e. new versions were on a 7-8 yr cycle ... and the annual model refresh consisted primarily of superficial cosmetic changes.
there has been a recent TV advertisement somewhat poking fun at that characteristic ... by a new chewing gum product that positions itself as having much longer-lasting taste (when chewed). The joke is the marketing department pointing out that if the taste lasts much longer ... people will be buying less.
For a time, I had an ongoing argument with the people responsible for some implementation feature that went into SVS/MVS ... but they insisted on doing it their way anyway. Some 6-7 yrs later, I got a call from somebody in POK mentioning that they had gotten a large corporate award for coming up with a new method for how the MVS kernel implemented the feature. He was wondering if he could make the same change in vm370 (for another large corporate award). Unfortunately, I had to point out to him that I had always done it that way ... going back to when I was an undergraduate implementing code in cp67 nearly 15yrs earlier. I then had some snide thought that instead of handing out a large corporate award for fixing something that shouldn't have needed fixing ... they should have retroactively penalized the compensation of the people responsible for the earlier faulty implementation.
related posts in this thread
https://www.garlic.com/~lynn/2007h.html#29 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007h.html#30 sizeof() was: The Perfect Computer - 36 bits?
and for a little topic drift
https://www.garlic.com/~lynn/2007e.html#48 time spent/day on a computer
https://www.garlic.com/~lynn/2007h.html#25 sizeof() was: The Perfect Computer - 36 bits?
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: GA24-3639 Newsgroups: alt.folklore.computers Date: Tue, 17 Apr 2007 09:14:00 -0600
Quadibloc <jsavard@ecn.ab.ca> writes:
the 2305 was a fixed head disk ... the 2305-2 had about 11mbyte capacity,
avg. rotational delay of 5ms and 1.5mbyte/sec transfer. the
2305-1 had about half the capacity, half the rotational delay and
twice the transfer rate. I never saw a 2305-1, but conjectured from the
specs ... that they took two heads (offset 180 degrees) and operated them
in parallel on the same track ... a record then would have its bytes
interleaved between the heads on opposite sides of the track. reference here
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_2305.html
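the figures above can be checked with a little arithmetic ... using only the numbers quoted in the post (the two-head interleaving is, as stated, conjecture):

```python
# figures quoted for the 2305-2; the two-head layout is conjecture
rotation_ms = 2 * 5.0        # avg rotational delay 5 ms => 10 ms/revolution
rpm = 60_000 / rotation_ms   # => 6000 RPM

heads_per_track = 2          # offset 180 degrees, operated in parallel
# a record's start reaches the nearer of the two heads in at most half
# a turn, a quarter turn on average
avg_delay_ms = rotation_ms / (2 * heads_per_track)
# bytes interleaved across both heads doubles the transfer rate
transfer_mb_per_sec = 1.5 * heads_per_track

assert rpm == 6000
assert avg_delay_ms == 2.5   # half the 2305-2's 5 ms average delay
assert transfer_mb_per_sec == 3.0
```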
recent discussion of the 3mbyte channel feature for 370 ... before the
data-streaming feature (relaxed end-to-end handshake on every byte
transferred) that both doubled channel cable length from 200ft to 400ft
and increased data transfer to 3mbyte/sec (from 1.5mbyte/sec) ... but somebody
mentioned that the 3838 array processor also attached to the channel at
3mbyte/sec (and was possibly only supported by the 165/168 external channel
box)
https://www.garlic.com/~lynn/2007e.html#40 FBA rant
https://www.garlic.com/~lynn/2007e.html#59 FBA rant
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: sizeof() was: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Tue, 17 Apr 2007 09:33:39 -0600
jmfbahciv writes:
we were taking lots of arrows from the "SAA" and token-ring forces
... i.e. the SAA effort has been somewhat described as trying
to stuff the 2tier, client/server genie back into the bottle ...
attempting to hold the bulwarks for the terminal emulation
paradigm (and installed product base)
https://www.garlic.com/~lynn/subnetwork.html#emulation
and since we were pitching enet as much better than token-ring
... the token-ring crowd weren't very happy ... some recent posts:
https://www.garlic.com/~lynn/2007g.html#80 IBM to the PCM market(the sky is falling!!!the sky is falling!!)
https://www.garlic.com/~lynn/2007g.html#81 IBM to the PCM market
https://www.garlic.com/~lynn/2007h.html#0 The Perfect Computer - 36 bits?
disclaimer ... my wife is co-inventor on an early (both US and
international) token passing ring patent ... done back when she had been
con'ed into going to POK to take over (mainframe)
loosely-coupled (i.e. cluster) architecture ... where she was
responsible for the Peer-Coupled Shared Data architecture
https://www.garlic.com/~lynn/submain.html#shareddata
... and except for IMS hot-standby, it didn't see a lot of uptake until the relatively recent SYSPLEX offerings.
This was after she had co-authored the peer-to-peer networking architecture AWP39 ... done in the same timeframe as, and somewhat competitive with, SNA. One of the issues with the peer-to-peer networking architecture was that it was basically a processor-to-processor infrastructure ... which saw relatively little commercial customer market at the time ... while SNA was basically oriented around controlling large numbers of terminals ... and there were a huge number of commercial accounts with large terminal (or other device, like ATM machine) populations (i.e. tens of thousands or more, not infrequently).
The other author of AWP39, was the person (that much later)
forwarded collection of email about what the SNA forces were up to (after
we had been taken out of the NSFNET effort)
https://www.garlic.com/~lynn/2006w.html#email870109
in this post
https://www.garlic.com/~lynn/2006w.html#21 SNA/VTAM for NSFNET
recent post also mentioning AWP39
https://www.garlic.com/~lynn/2007d.html#55 Is computer history taugh now?
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: sizeof() was: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Tue, 17 Apr 2007 09:46:16 -0600
jmfbahciv writes:
so after having done the consulting with the small client/server
startup that had this technology called SSL and wanted to handle
payment transactions on their server
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
we spent some time in the x9a10 financial standard working group
coming up with the x9.59 standard
https://www.garlic.com/~lynn/x959.html#x959
in the mid-90s, the x9a10 financial standard working group had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments. as part of this we had to do detailed end-to-end vulnerability and threat analysis of a variety of different operational environments ... and perfect a solution that had countermeasures for all identified vulnerabilities and threats. By comparison, the other activities (from the period) tended to come up with simple "point" countermeasures ... frequently leaving lots of vulnerabilities unaddressed (and/or even creating new vulnerabilities).
a different, partial, interim solution that has been tried numerous times
by different institutions over the past decade or so ... has been
one-time account numbers ... i.e. the consumer is provided with a list
of different account numbers ... that they can use with their
account. each account number is only good for one use ... and as they
use an account number, the consumer crosses it off the list. this
places a lot of the burden on the consumer ... but is a countermeasure
to the current skimming/eavesdropping/harvesting vulnerabilities
https://www.garlic.com/~lynn/subintegrity.html#harvest
that then use the harvested information in replay attacks for fraudulent transactions.
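a minimal sketch of the one-time account number scheme (hypothetical Python ... real issuer systems obviously differ): the consumer crosses each number off after use, and the issuer refuses any number presented a second time, defeating replay of harvested numbers.

```python
class OneTimeAccountList:
    """Consumer side: a list of one-time account numbers, each valid
    for a single transaction and crossed off after use."""

    def __init__(self, numbers):
        self._unused = list(numbers)

    def next_number(self) -> str:
        return self._unused.pop(0)  # use it and cross it off

class IssuerSide:
    """Issuer side: honor each number exactly once; a replayed
    (skimmed/harvested) number is refused."""

    def __init__(self, numbers):
        self._valid = set(numbers)

    def authorize(self, n: str) -> bool:
        if n in self._valid:
            self._valid.remove(n)
            return True
        return False  # unknown, or already spent (replay attempt)

nums = ["4000-0001", "4000-0002", "4000-0003"]
consumer, issuer = OneTimeAccountList(nums), IssuerSide(nums)

n = consumer.next_number()
assert issuer.authorize(n) is True   # first use succeeds
assert issuer.authorize(n) is False  # replay of a harvested number fails
```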
misc. posts mentioning one-time (use) account numbers
https://www.garlic.com/~lynn/aadsm17.htm#42 Article on passwords in Wired News
https://www.garlic.com/~lynn/2003n.html#0 public key vs passwd authentication?
https://www.garlic.com/~lynn/2007c.html#6 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007c.html#15 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007g.html#19 T.J. Maxx data theft worse than first reported
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: sizeof() was: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Tue, 17 Apr 2007 10:30:36 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
part of what contributed to taking so long to get the x9.59 standard passed was frequently encountering a kneejerk response from some of those involved, who appeared to have been indoctrinated to believe that digital certificates were equivalent to security.
as part of the stuff currently commonly referred to as electronic commerce ... we had to work out the full end-to-end operation of applying digital certificates to normal business processes ... including all the backroom stuff where all the real work is done. it slowly began to dawn on us that digital certificates were really targeted at the situation involving two parties that had absolutely no prior relationship (and/or had no recourse for directly contacting a trusted 3rd party with regard to the other party).
in the internet retail payment scenario ... it turns out that the majority of transactions between consumer and merchant are repeat business ... the "digital certificate" design point possibly only applies to maybe 5-15 percent of actual transactions. furthermore, even in situations where the consumer/merchant are strangers ... for decades the business process has actually had a timely transaction going between the merchant and the consumer's financial institution ... where there is a prior relationship.
as more and more end-to-end business processes were worked out in detail, it became more and more apparent that digital certificates were redundant and superfluous ... except in the case involving two complete strangers ... where neither party has timely access to a trusted 3rd party.
as a result, getting thru the x9.59 standards process would frequently require taking participants thru the actual end-to-end business processes, along with all the implications and operations (and the various things that we had learned having done the work on the details of the existing implementation)
recent posts on the subject:
https://www.garlic.com/~lynn/2007h.html#28 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007h.html#31 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007h.html#32 sizeof() was: The Perfect Computer - 36 bits?
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: sizeof() was: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Tue, 17 Apr 2007 12:14:22 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
as i've mentioned before ... the SNA crowd seemed to constantly view non-SNA networking operation as competitive with what they were doing ... even when it was peer-to-peer computer networking ... while they were focused on controlling large numbers of dumb terminals/devices.
Now, in the early part of this period ... most of the things out there had little or no local (microprocessor) smarts ... so there was an enormous commercial requirement for dumb terminal/device support ... which SNA filled.
By comparison ... in that period, there was significantly less commercial requirement for direct processor-to-processor operation ... but the SNA forces would still view such offerings as competitive with what they were doing (at least from a corporate politics standpoint ... even if there was no overlap technically).
part of the reason that my wife didn't last long in POK (in charge of
loosely-coupled architecture) ... was ongoing battles with SNA. I've
mentioned before that the battle was (temporarily) resolved by
allowing her to define processor-to-processor peer-to-peer operation
... as long as it was confined to a datacenter machine room boundary
walls ... if it crossed the boundary walls of the datacenter machine
room ... then it became the responsibility of SNA. past
posts/references
https://www.garlic.com/~lynn/2005r.html#8 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#9 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#14 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#16 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005u.html#23 Channel Distances
https://www.garlic.com/~lynn/2006o.html#38 hardware virtualization slower than software?
https://www.garlic.com/~lynn/2006t.html#36 The Future of CPUs: What's After Multi-Core?
Part of the later transition was when everything started to acquire its own microprocessing smarts and could support full peer-to-peer operation.
i've often commented that the internal network was larger than the
arpanet/internet from just about the beginning until possibly mid-85
https://www.garlic.com/~lynn/subnetwork.html#internalnet
and one of the reasons that the size of the internet exceeded the size
of the internal network was the proliferation of workstations and
personal computers as network nodes ... and it was in this period that
the SNA forces were in the middle of the battle to maintain
distributed personal computer operation via the dumb terminal
emulation paradigm
https://www.garlic.com/~lynn/subnetwork.html#emulation
while the dumb terminal emulation paradigm contributed to the early uptake of personal computers ... i.e. a business that already had (financial) justification for tens (or hundreds) of thousands of (dumb) 3270 terminals ... could get a PC for about the same price as a 3270 terminal ... and in a single desktop footprint have it serve the function of both a 3270 terminal and provide some local processing (i.e. the origin of the term desktop computing).
It was the increasing capability of such desktop computing ... where it could move into full peer-to-peer networking (2tier ... and then the 3tier, middleware/layer that we were out pitching) ... and out of strictly dumb terminal operation ... that contributed to a lot of the heartburn in the SNA crowd.
The internal network technology was somewhat a thorn in their side
... but as long as pushing it as a commercial product was kept under
control ... it was viewed as more of an irritant than a real "threat".
Even our early involvement in the early NSF networking stuff was
viewed much more as an irritant than a real threat ... various past
posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
and old email
https://www.garlic.com/~lynn/lhwemail.html#nsfnet
it wasn't until the NSFNET backbone (which i've claimed is the real
operational precursor to the modern internet) was going to become
highly visible that they took steps to preclude our doing the
implementation ... old email about a large organizational meeting (which
got canceled and not allowed to happen)
https://www.garlic.com/~lynn/2005d.html#email860501
https://www.garlic.com/~lynn/2006u.html#email860505
and this post
https://www.garlic.com/~lynn/2006u.html#56 Ranking of non-IBM mainframe builders?
and then, with us taken out of the way ... it was
(theoretically) possible for the SNA proponents to move in and supply
(SNA) solutions for NSFNET ... old email
https://www.garlic.com/~lynn/2006w.html#email870109
in this post
https://www.garlic.com/~lynn/2006w.html#21 SNA/VTAM for NSFNET
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: sizeof() was: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Tue, 17 Apr 2007 13:32:33 -0600
Frank McCoy <mccoyf@millcomm.com> writes:
one of the things we ran into in the transition from the 3272/3277 controller/terminal to the 3274/327x controller/terminal ... was that they moved some amount of the ("simple") electronics from the terminal head back into the controller. Some of the electronic hacks that we had done to the 3277 terminal (to make it somewhat more human friendly) were no longer possible with the 3274/327x combo. this was separate from the issue that moving electronics back into the 3274 controller (besides appreciably reducing manufacturing costs) increased the latency/processing of operations (when we complained about all this ... the business response was that the terminal market was primarily data entry ... not interactive computing ... possibly 3-4 orders of magnitude more terminals sold for data entry than for interactive computing).
it was the topaz/3101 glass teletype coming out of japan where component and
manufacturing costs had come down enuf (providing corresponding profit margin)
that you saw local microprocessing and the possibility of local
programmability ... misc. past posts about getting an early topaz/3101 and
looking at burning new PROMs
https://www.garlic.com/~lynn/2006y.html#0 Why so little parallelism?
https://www.garlic.com/~lynn/2007e.html#15 The Genealogy of the IBM PC
I've mentioned before about the SNA joke ... not being a system, not
being a network and not being an architecture. The first glimmer of
something approaching networking support was APPN (mid-80s, AWP164). At the
time, the person responsible for APPN and I reported to the same
executive. We used to rib him to stop wasting time trying to craft
networking into SNA and work on real (internet) networking ... since
they were never going to appreciate what he was doing. In fact, when
it came time to announce APPN, the SNA organization even non-concurred
... and it took 6-8 weeks of escalation to be able to get the APPN
product announcement out the door (which included careful rewording of
the announcement "blue letter" to avoid implying that there was any
relationship at all between APPN and SNA). Other recent mention of
AWP164 (as well as AWP39)
https://www.garlic.com/~lynn/2007d.html#55 Is computer history taugh now?
https://www.garlic.com/~lynn/2007h.html#35 sizeof() was: The Perfect Computer - 36 bits?
note that another aspect of SNA ... is that it can be viewed as still
trying to achieve the objectives set forth in the (failed/canceled) future
system project
https://www.garlic.com/~lynn/submain.html#futuresys
a quote about future system objectives here:
https://www.garlic.com/~lynn/2000f.html#16 [OT] FS - IBM Future System
with the pu4/pu5 (3705/vtam) interface.
the future system objectives, in turn, trace back to the clone controller
business ... and an article blaming (at least in part) four of us
.... for having done a clone telecommunication controller as
undergraduates in the 60s (started out using an interdata/3 with our own
channel adapter board, programmed to emulate a 2702/2703)
https://www.garlic.com/~lynn/submain.html#360pcm
misc. past posts mentioning 3272/3274 controller issues
https://www.garlic.com/~lynn/94.html#23 CP spooling & programming technology
https://www.garlic.com/~lynn/99.html#193 Back to the original mainframe model?
https://www.garlic.com/~lynn/2000c.html#63 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#66 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#67 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2001k.html#30 3270 protocol
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol
https://www.garlic.com/~lynn/2002k.html#2 IBM 327x terminals and controllers (was Re: Itanium2 power
https://www.garlic.com/~lynn/2002k.html#6 IBM 327x terminals and controllers (was Re: Itanium2 power
https://www.garlic.com/~lynn/2003c.html#69 OT: One for the historians - 360/91
https://www.garlic.com/~lynn/2003e.html#43 IBM 3174
https://www.garlic.com/~lynn/2003h.html#15 Mainframe Tape Drive Usage Metrics
https://www.garlic.com/~lynn/2003i.html#30 A Dark Day
https://www.garlic.com/~lynn/2003k.html#20 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003o.html#14 When nerds were nerds
https://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#28 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005u.html#22 Channel Distances
https://www.garlic.com/~lynn/2006q.html#10 what's the difference between LF(Line Fee) and NL (New line) ?
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?
misc. past posts mentioning APPN
https://www.garlic.com/~lynn/2000.html#51 APPC vs TCP/IP
https://www.garlic.com/~lynn/2000.html#53 APPC vs TCP/IP
https://www.garlic.com/~lynn/2000b.html#89 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2000c.html#54 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2001i.html#31 3745 and SNI
https://www.garlic.com/~lynn/2002.html#28 Buffer overflow
https://www.garlic.com/~lynn/2002b.html#54 Computer Naming Conventions
https://www.garlic.com/~lynn/2002c.html#43 Beginning of the end for SNA?
https://www.garlic.com/~lynn/2002g.html#48 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#12 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#48 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002k.html#20 Vnet : Unbelievable
https://www.garlic.com/~lynn/2003d.html#49 unix
https://www.garlic.com/~lynn/2003h.html#9 Why did TCP become popular ?
https://www.garlic.com/~lynn/2003o.html#55 History of Computer Network Industry
https://www.garlic.com/~lynn/2003p.html#2 History of Computer Network Industry
https://www.garlic.com/~lynn/2003p.html#39 Mainframe Emulation Solutions
https://www.garlic.com/~lynn/2004g.html#12 network history
https://www.garlic.com/~lynn/2004p.html#31 IBM 3705 and UC.5
https://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
https://www.garlic.com/~lynn/2005p.html#9 EBCDIC to 6-bit and back
https://www.garlic.com/~lynn/2005p.html#15 DUMP Datasets and SMS
https://www.garlic.com/~lynn/2005q.html#20 Ethernet, Aloha and CSMA/CD -
https://www.garlic.com/~lynn/2006h.html#52 Need Help defining an AS400 with an IP address to the mainframe
https://www.garlic.com/~lynn/2006k.html#21 Sending CONSOLE/SYSLOG To Off-Mainframe Server
https://www.garlic.com/~lynn/2006l.html#45 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
https://www.garlic.com/~lynn/2007b.html#48 6400 impact printer
https://www.garlic.com/~lynn/2007b.html#49 6400 impact printer
https://www.garlic.com/~lynn/2007d.html#55 Is computer history taugh now?
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: sizeof() was: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Tue, 17 Apr 2007 15:33:58 -0600
Frank McCoy <mccoyf@millcomm.com> writes:
i got involved with trying to put out as a product some work that one of the RBOCs had implemented on series/1 ... that emulated pu4/3705 (with a channel attach card) and pu5/sscp/vtam cross-domain ... with a peer-to-peer network implementation ... and would tell/fake the pu5/vtam that the "resource" was owned by some other mainframe (when in fact, the "resource" was "owned" by the network).
it was actually a two-phase business plan ... putting out the initial implementation on series/1 ... while in parallel porting the implementation to rios (used for rs/6000).
this resulted in all sorts of corporate turmoil with the SNA
organization ... which would have included impacting numerous parts of
their revenue sources.
https://www.garlic.com/~lynn/99.html#63 System/1 ?
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/99.html#67 System/1 ?
https://www.garlic.com/~lynn/99.html#70 Series/1 as NCP (was: Re: System/1 ?)
in the above posts are portions of a presentation that i gave at a quarterly SNA architecture review board (SNA/ARB) meeting ... which was like rubbing their noses in it (after my talk, the executive running the ARB wanted to know who had approved my giving a talk to them ... implying a desire to head off any sort of repeat performance in the future).
... the possibility that we might do the NSFNET backbone w/o any
SNA content wasn't the only thing giving them heartburn
https://www.garlic.com/~lynn/2007h.html#38 sizeof() was: The Perfect Computer - 36 bits?
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Fast and Safe C Strings: User friendly C macros to Declare and use C Strings. Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Tue, 17 Apr 2007 16:44:05 -0600
shmuel+ibm-main@ibm-main.lst (Shmuel Metz , Seymour J.) writes:
lots of past posts mentioning various buffer overflow related problems
with c language implementations
https://www.garlic.com/~lynn/subintegrity.html#overflow
the pascal language implementation had originally been done by two people at the los gatos vlsi lab ... as part of a lot of work on tools supporting chip design. the compiler was eventually released as a product ... first as an IUP and then as a program product. the implementation was eventually also ported from the mainframe to (workstation) aix.
much later, as part of corporate strategy moving to (COTS) off-the-shelf tools ... some number of the tools/applications were ported to other vendor workstations and then turned over to external (chip tool) vendor.
in this exercise i was given the opportunity to port one such 60k line (vs/pascal based) application to another workstation platform. Unfortunately, the pascal implementation for that platform appeared to have never moved past the stage of being used for student educational purposes ... plus they had outsourced the implementation to an organization on the opposite side of the globe (which really complicated resolving compiler and runtime issues).
total topic drift in this indirect reference using such tool skills
for redoing airline res ROUTES application
https://www.garlic.com/~lynn/2007g.html#22 Bidirectional Binary Self-Joins
https://www.garlic.com/~lynn/2007g.html#41 US Airways badmouths legacy system
for totally other folklore ... one of the two original people responsible for pascal at the los gatos vlsi lab shows up later as vp of software development at MIPS and then (later still) general manager of the business unit that has responsibility for the original JAVA product.
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Experts: Education key to U.S. competitiveness Newsgroups: alt.folklore.computers Date: Tue, 17 Apr 2007 21:45:54 -0600Experts: Education key to U.S. competitiveness
the above has probably been almost continuously repeated for the last 50yrs?
recent thread:
https://www.garlic.com/~lynn/2007g.html#6 U.S. Cedes Top Spot in Global IT Competitiveness
https://www.garlic.com/~lynn/2007g.html#7 U.S. Cedes Top Spot in Global IT Competitiveness
https://www.garlic.com/~lynn/2007g.html#34 U.S. Cedes Top Spot in Global IT Competitiveness
https://www.garlic.com/~lynn/2007g.html#35 U.S. Cedes Top Spot in Global IT Competitiveness
https://www.garlic.com/~lynn/2007g.html#52 U.S. Cedes Top Spot in Global IT Competitiveness
https://www.garlic.com/~lynn/2007g.html#68 U.S. Cedes Top Spot in Global IT Competitiveness
======
and some similar threads from last year:
https://www.garlic.com/~lynn/2006l.html#61 DEC's Hudson fab
https://www.garlic.com/~lynn/2006l.html#63 DEC's Hudson fab
https://www.garlic.com/~lynn/2006p.html#21 SAT Reading and Math Scores Show Decline
https://www.garlic.com/~lynn/2006p.html#23 SAT Reading and Math Scores Show Decline
https://www.garlic.com/~lynn/2006p.html#24 SAT Reading and Math Scores Show Decline
https://www.garlic.com/~lynn/2006p.html#25 SAT Reading and Math Scores Show Decline
https://www.garlic.com/~lynn/2006p.html#33 SAT Reading and Math Scores Show Decline
https://www.garlic.com/~lynn/2006q.html#6 SAT Reading and Math Scores Show Decline
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: sizeof() was: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Tue, 17 Apr 2007 22:18:27 -0600Anne & Lynn Wheeler <lynn@garlic.com> writes:
and recent item with another kind of one-time transaction code
The Clearing House Prepares for Consumer Use of Payment Codes
http://www.digitaltransactions.net/newsstory.cfm?newsid=1314
from above:
Electronic transactions using unique numerical identifiers to mask
account and routing data are rising fast, and now the company behind the
technology expects it will be commercially available for consumer
payments in about a year. The Clearing House Payments Co. LLC says
corporate users made 80,459 transactions in 2006 using its 5-year-old
Universal Payment Identification Code, up from 9,696 in 2005. UPIC
payments totaled $4.7 billion, up from $571 million.
... snip ...
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: sizeof() was: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Wed, 18 Apr 2007 10:14:00 -0600Andrew Swallow <am.swallow@btopenworld.com> writes:
high-speed networking in the 70s and 80s ... was constantly running into processing bottlenecks in hosts ... and looking at moving the processing outboard into external boxes and hardware. fixed-length fields would simplify processing in outboard boxes. also in the 70s and into the 80s ... memory was expensive ... so if you had offload processing in outboard boxes ... it was an advantage to be able to pipeline processing of the data moving thru ... w/o needing a lot of local, intermediate storage to stage information.
simple examples are the purely hardware implementations like ethernet and ATM (not the cash machine atm).
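a minimal sketch of why fixed-length fields help outboard boxes: every field sits at a known offset, so each can be acted on as the data streams past, with no delimiter scanning and no staging of the whole packet. the 12-byte header layout here is made up purely for illustration:

```python
import struct

# hypothetical fixed-layout header: every field at a known offset, so
# hardware (or this parser) can act on each field as it streams past,
# with no delimiter scanning and no buffering of the full packet
HDR = struct.Struct(">HHIHH")   # type, flags, dest addr, length, checksum

def parse_header(first_bytes):
    """decode the fixed 12-byte header in one step"""
    ptype, flags, dest, length, cksum = HDR.unpack_from(first_bytes)
    return {"type": ptype, "flags": flags, "dest": dest,
            "len": length, "cksum": cksum}

hdr = parse_header(struct.pack(">HHIHH", 1, 0, 0x0A000001, 512, 0))
```

with variable-length, delimiter-separated fields, the box would instead have to scan byte-by-byte before it even knew where the next field started.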
in the late 80s, we were on the XTP technical advisory board ... looking at fielding a high-speed protocol ... and overcoming a lot of the processing inefficiencies related to ip and tcp ... target paradigm was pipelined offload hardware processing.
misc. past posts about XTP ... and HSP ... trying to get high-speed
protocol work item in x3s3.3 (ISO chartered us network standards body)
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
as i've commented before ... one of the issues in x3s3.3 was that ISO had a requirement that no standards work be done on stuff that violated OSI. XTP/HSP violated OSI in several areas (and therefore was rejected)
1) XTP/HSP supported LAN/MAC interface. LAN/MAC sits in the middle of the OSI network layer (rather than at an interface/boundary) ... LAN/MAC violated OSI ... and therefore anything that supported LAN/MAC violated OSI
2) XTP/HSP went directly from transport to LAN/MAC ... bypassing transport/network interface. bypassing transport/network interface violated OSI.
3) XTP/HSP supported internetworking. internetworking (IP) sits between the bottom of transport and the top of network ... which doesn't exist in OSI. support for internetworking (that doesn't exist in OSI) violated OSI.
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: ANN: Microsoft goes Open Source Newsgroups: alt.folklore.computers Date: Wed, 18 Apr 2007 10:35:18 -0600Frank McCoy <mccoyf@millcomm.com> writes:
a passing past reference to 186 (post is mainly misc. old email from
'82)
https://www.garlic.com/~lynn/2006p.html#15 "25th Anniversary of the Personal Computer"
somebody in the thread last yr must have had a URL reference to detailed timeline.
of course, wiki can be your friend
https://en.wikipedia.org/wiki/Intel_8086
https://en.wikipedia.org/wiki/Intel_8088
https://en.wikipedia.org/wiki/80188
https://en.wikipedia.org/wiki/80186
https://en.wikipedia.org/wiki/80286
which has pointers to things like:
http://www.intel.com/design/intarch/intel186/
https://en.wikipedia.org/wiki/NEC_V20
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: ANN: Microsoft goes Open Source Newsgroups: alt.folklore.computers Date: Wed, 18 Apr 2007 10:59:21 -0600Frank McCoy <mccoyf@millcomm.com> writes:
they hung in as somewhat embedded chips in other form factors. the point-of-sale cardswipe terminals for a couple decades were "PC/XT" ... radically different form factor and solid state memory in lieu of hard disk.
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: ANN: Microsoft goes Open Source Newsgroups: alt.folklore.computers Date: Wed, 18 Apr 2007 12:32:10 -0600krw <krw@att.bizzzz> writes:
same past posts with references
https://www.garlic.com/~lynn/2003g.html#61 IBM zSeries in HPC
https://www.garlic.com/~lynn/2004b.html#1 The BASIC Variations
https://www.garlic.com/~lynn/2005q.html#33 Intel strikes back with a parallel x86 design
lets see if I have some old posts with prices of the era (although i
think these are yr or two later)
https://www.garlic.com/~lynn/2001n.html#79 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2001n.html#80 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2001n.html#81 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2001n.html#82 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
and oblique recent reference
https://www.garlic.com/~lynn/2007g.html#81 IBM to the PCM market
https://www.garlic.com/~lynn/2007h.html#0 The Perfect Computer - 36 bits?
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Securing financial transactions a high priority for 2007 Newsgroups: alt.folklore.computers Date: Wed, 18 Apr 2007 12:57:53 -0600Anne & Lynn Wheeler <lynn@garlic.com> writes:
and more recent article from yesterday:
Banks must come clean on ID theft
http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2007/04/17/EDGEBOS87H1.DTL
from above:
Two separate studies recently reached conflicting conclusions: While one
found that identity theft is on the rise significantly, the other
reported that it is on the decline.
So which is it?
... snip ...
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: ANN: Microsoft goes Open Source Newsgroups: alt.folklore.computers Date: Wed, 18 Apr 2007 19:59:37 -0600krw <krw@att.bizzzz> writes:
several of the snips have references to articles comparing the various processors, announce dates, transistor counts,etc ... and other articles from the 80s
this specific one has summary of pieces from a longer article (in addition
to 86 stuff ... also 68k stuff)
https://www.garlic.com/~lynn/2001n.html#80 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
a little use of a search engine turns up this slightly more detailed (& more recent, 2006 instead of 1989) article
Microprocessor Types and Specifications
http://www.quepublishing.com/articles/article.asp?p=481859&seqNum=14&rl=1
the description of the 386sx gives it somewhat the same relationship to the 386dx as the 8088 to the 8086 ... internally it was a 386dx ... but the external interfaces/buses were compatible with the 286 (allowing the 386sx to be put into systems that had been designed for the 286).
and the above reference for 486 chapter
http://www.quepublishing.com/articles/article.asp?p=481859&seqNum=15&rl=1
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: ANN: Microsoft goes Open Source Newsgroups: alt.folklore.computers Date: Wed, 18 Apr 2007 23:28:59 -0600Joe Pfeiffer <pfeiffer@cs.nmsu.edu> writes:
one of the refs in previous post
http://www.quepublishing.com/articles/article.asp?p=481859&seqNum=14&rl=1
discusses 80387 coprocessor ... and mentions that 80287 coprocessor was merely 8087 with different pins ... and ... "because intel lagged in developing 387 coprocessor, some early 386 systems were designed with socket for 287" (aka 8087 with different pins)
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Securing financial transactions a high priority for 2007 Newsgroups: alt.folklore.computers Date: Thu, 19 Apr 2007 07:45:20 -0600jmfbahciv writes:
a blog here commenting on the article (combined with comments in other ongoing threads):
On cleaning up the security mess: escaping the self-perpetuating trap of
Fraud?
https://financialcryptography.com/mt/archives/000895.html
and long-winded collection of posts on the subject of Naked Transaction
Metaphor
https://www.garlic.com/~lynn/subintegrity.html#payments
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: ANN: Microsoft goes Open Source Newsgroups: alt.folklore.computers Date: Thu, 19 Apr 2007 09:38:47 -0600jmfbahciv writes:
the 360/30 could run in 1401 hardware emulation mode ... but there was desire to get experience transitioning to 360.
original MPIO did either card->tape or tape->printer/punch.
i got to design my own monitor, interrupt handlers, device drivers, storage manager, etc.
i got my implementation to the point that it could handle two parallel tape streams ... one doing card->tape and the other tape->printer/punch ... and allocate available memory for I/O buffers. tape->printer/punch would spin at full tape speed until the buffers filled ... and then slow down to printer speed (1403N1, 1100 lines/minute). of course these were only 7track, 200bpi ... that were supported by 709.
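the buffering behavior can be sketched as a toy model (the buffer count and record stream here are made up; real MPIO overlapped the I/O with interrupts, this just shows the fill-then-throttle pattern):

```python
from collections import deque

# toy model: tape->printer runs at full tape speed until the free
# buffers are used up, then settles down to printer speed
def run(records, nbuffers):
    full = deque()                 # buffers holding tape records
    printed = []
    at_tape_speed = 0              # records read before the first stall
    stalled = False
    for rec in records:
        if len(full) >= nbuffers:  # pool exhausted: must print one first
            stalled = True
            printed.append(full.popleft())
        if not stalled:
            at_tape_speed += 1
        full.append(rec)
    printed.extend(full)           # drain remaining buffers to printer
    return at_tape_speed, printed

burst, out = run(list(range(100)), nbuffers=8)
# the reader runs ahead by the number of buffers, then is paced by the printer
```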
4341 and vax/780 were relatively comparable processing power ... and 360/30 was somewhere 1-5 percent that of 4341.
although the table here
http://www.jcmit.com/cpu-performance.htm
uses 780 performance as "1" (VUP?) ... and lists 4341 as .6 and 360/30 as 0.01165 (just slightly over one percent the processing power of the 780)
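plugging the table's numbers in (with the 780 taken as 1.0):

```python
# relative performance figures from the table above (vax 780 = 1.0)
vax_780 = 1.0
ibm_4341 = 0.6
ibm_360_30 = 0.01165

pct_of_780 = ibm_360_30 / vax_780 * 100    # ~1.17 percent of a 780
pct_of_4341 = ibm_360_30 / ibm_4341 * 100  # ~1.94 percent of a 4341
```

which puts the 360/30 at just under two percent of a 4341 ... consistent with the 1-5 percent estimate above.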
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: John W. Backus, 82, Fortran developer, dies Newsgroups: alt.folklore.computers Date: Thu, 19 Apr 2007 10:37:54 -0600Charlton Wilbur <cwilbur@chromatico.net> writes:
we ran into (at least one) operation ... located in a metropolitan area in a large skyscraper ... in each overnight operation, it cleared more money than the lease on the whole skyscraper for a year plus the aggregate of a years compensation for everybody that worked in the building.
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: ANN: Microsoft goes Open Source Newsgroups: alt.folklore.computers Date: Thu, 19 Apr 2007 11:24:24 -0600Andrew Swallow <am.swallow@btopenworld.com> writes:
as i've referenced in the past ... cp67/vm370 wouldn't have been considered less or more "real" than the os/360 varieties ... it was just in the period that the customers installed an order of magnitude or more of the batch variety.
part of the heritage of commercial batch infrastructure was that a large amount of the commercial dataprocessing workload was on behalf of the business ... not doing tasks on behalf of an individual. this was the nuts and bolts of running a commercial enterprise ... not related to any individual ... but on behalf of the business. things like recurring payroll being run w/o regard to any individual or group of individuals.
some of the webservers have been slowly growing (up) into such environments ... i.e. the webserver runs 24x7 ... whether or not people are present. the web server application is somewhat analogous to the "batch" applications supporting large numbers of ATMs (as in teller, not transfer) that have been around since at least the 70s.
once computing price/performance dropped below some threshold ... it could find more and more use in less profitable areas ... like email.
for other drift, past posts about rewriting i/o supervisor in the late
70s, so it could be used in disk engineering/development and product
test labs (since the standard "batch" mvs operating system had MTBF
of 15 minutes in that environment)
https://www.garlic.com/~lynn/subtopic.html#disk
recent post with references to cp67 & vm370 being just as real as os/360
varieties ... it was just that up into the early 80s ... that business
oriented (as opposed to individual/personal) applications dominated the
market place i.e. customer batch installations far exceeded customer
cp67/vm370 installations, which far exceeded internal cp67/vm370
installation, which far exceeded the max. number of internal cp67/vm370
that I directly supported at one point, which was about the same number
of the total multics installations in its whole lifetime
https://www.garlic.com/~lynn/2007g.html#75 The Perfect Computer - 36 bits?
this is somewhat analogous to recent post discussing various aspects
of the transition from desktop terminals to desktop computing
https://www.garlic.com/~lynn/2007h.html#38 sizeof() was: The Perfect Computer - 36 bits?
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: ANN: Microsoft goes Open Source Newsgroups: alt.folklore.computers Date: Thu, 19 Apr 2007 16:09:02 -0600l_cole writes:
which somewhat precipitated this news item
We pluck the lemons; you get the plums: the Lemon Maligned, in Wikipedia
as in the security literature
https://financialcryptography.com/mt/archives/000896.html
which makes reference to this nobel prize work
Markets with Asymmetric Information
http://nobelprize.org/nobel_prizes/economics/laureates/2001/public.html
which then allows for topic drift and this hot-of-the-press URL
'Freakonomics' writer talks monkey business
http://news.com.com/Freakonomics+writer+talks+monkey+business/2100-1026_3-6177655.html
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: T.J. Maxx data theft worse than first reported Newsgroups: bit.listserv.ibm-main Date: Thu, 19 Apr 2007 17:25:22 -0600Tom.Schmidt@ibm-main.lst (Tom Schmidt) writes:
one of the scenarios where this would result in problems is where the merchant had an online webstore as well as lots of brick&mortar. software in a typical e-commerce operation will usually emulate a transaction from a traditional POS terminal ... and the merchant would drive all their transactions thru their single concentrator.
at issue is that the interchange fee tends to be quite a bit different for webservers ... and much of the fee determination/billing is driven off merchant and/or location code. having everything coming in thru a single interface has resulted in situations where the web transactions were obfuscated.
old post about security proportional to risk
https://www.garlic.com/~lynn/2001h.html#61
and related observation that attackers may be able to outspend
defender by as much as 100:1
https://www.garlic.com/~lynn/2007e.html#26 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007g.html#20 T.J. Maxx data theft worse than first reported
for a little topic drift ... past posts discussing the naked
transaction metaphor
https://www.garlic.com/~lynn/subintegrity.html#payments
lots of past posts on eavesdropping, skimming, harvesting, etc
that can be used for replay attacks
https://www.garlic.com/~lynn/subintegrity.html#harvest
and numerous posts discussing man-in-the-middle attacks
(as opposed to simple eavesdropping and replay attacks)
https://www.garlic.com/~lynn/subintegrity.html#mitm
and posts on general subject of fraud, vulnerabilities, threats,
exploits and risks
https://www.garlic.com/~lynn/subintegrity.html#fraud
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: ANN: Microsoft goes Open Source Newsgroups: alt.folklore.computers Date: Thu, 19 Apr 2007 21:35:42 -0600Morten Reistad <first@last.name> writes:
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: T.J. Maxx data theft worse than first reported Newsgroups: bit.listserv.ibm-main Date: Fri, 20 Apr 2007 08:20:38 -0600Tom.Schmidt@ibm-main.lst (Tom Schmidt) writes:
... don't think individual POS terminals sitting on the counter ... think corporate POS concentrator ... where all POS transactions for the whole corporation passes thru on the way to the financial network.
this is slightly analogous to the internet payment gateway (we periodically claim is the original SOA)
long ago, and far away, we were called in to consult with this small
client/server startup that had this technology called SSL and wanted
to do payment transactions on their server.
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
a "payment gateway" was developed and deployed ... lots of past posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
it is somewhat analogous to a corporate POS concentrator ... but can be used by lots of different (small) webservers any place on the web (as opposed to webservers in large corporation that frequently just aggregate into a corporate POS concentrator).
as before ... there are all kinds of eavesdropping technology (some may
or may not require some sort of physical operation) ... and then use
the harvested information for fraudulent transactions in various kinds
of replay attacks (being able to use information harvested from
previous transactions ... in new fraudulent transactions)
https://www.garlic.com/~lynn/subintegrity.html#harvest
as an aside ... it isn't too unusual to see such trucks parked all over the place around silicon valley ... they are brought in for regular audits for leaking/stray emissions. they typically don't bother to disguise external antennas
for some topic drift ... posts about trade secret litigation and some
question about whether the security was proportional to the risk
(i.e. had to demonstrate security procedures that were proportional to
the claimed value of the stuff at risk):
https://www.garlic.com/~lynn/2001d.html#42 IBM was/is: Imitation...
https://www.garlic.com/~lynn/2002d.html#8 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2003i.html#62 Wireless security
https://www.garlic.com/~lynn/2005r.html#7 DDJ Article on "Secure" Dongle
https://www.garlic.com/~lynn/2006r.html#29 Intel abandons USEnet news
https://www.garlic.com/~lynn/2007e.html#9 The Genealogy of the IBM PC
https://www.garlic.com/~lynn/2007f.html#45 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007f.html#46 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007f.html#57 Is computer history taught now?
part of the web case ... was that the existing infrastructure is extremely vulnerable to replay attacks.
from security acronym PAIN
P - privacy (sometimes CAIN, confidential)
A - authentication
I - integrity
N - non-repudiation
in the case of the payment gateway, SSL was used for privacy/confidentiality of the transaction transmission thru the internet ... i.e. achieving "security" with encryption as countermeasure to eavesdropping (as part of replay attacks). However, as we've frequently noted, most of the harvesting exploits appear to happen at the end-points ... as opposed to while the transaction is actually being transmitted.
now, in the mid-90s, the x9a10 financial standard working group had been
given the requirement to preserve the integrity of the financial
infrastructure for ALL retail payments. the result was x9.59
financial transaction standard
https://www.garlic.com/~lynn/x959.html#x959
in effect, the x9.59 financial standard substituted end-to-end "authentication" and "integrity" (for privacy, confidentiality, encryption) to achieve "security". providing end-to-end "authentication" and "integrity" eliminated eavesdropping as a risk or compromise ... since information from existing transactions could no longer be used for fraudulent transactions in replay attacks i.e. x9.59 transactions aren't vulnerable to eavesdropping, skimming, harvesting exploits ... whether "at-rest" or "in-transit".
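a minimal sketch of the authenticate-don't-hide idea. x9.59 itself specifies digital signatures over the transaction; the HMAC, shared key, and field names here are stand-in simplifications, just to show why a harvested copy is useless for replay:

```python
import hmac, hashlib

# sketch: each transaction carries a unique sequence number plus a MAC
# over all its fields (stand-in for the x9.59 digital signature)
KEY = b"account-holder-secret"   # made-up shared key for the sketch

def make_txn(account, amount, seq):
    msg = "%s|%s|%s" % (account, amount, seq)
    msg = msg.encode()
    return msg, hmac.new(KEY, msg, hashlib.sha256).hexdigest()

def accept(msg, tag, seen):
    _account, _amount, seq = msg.decode().split("|")
    good = hmac.new(KEY, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, good):
        return False            # altered in transit
    if seq in seen:
        return False            # harvested copy being replayed
    seen.add(seq)
    return True

seen = set()
msg, tag = make_txn("12345678", "19.95", "0001")
# the transaction travels in the clear; an eavesdropper sees everything
# but can neither alter it nor replay it
first = accept(msg, tag, seen)    # accepted
replay = accept(msg, tag, seen)   # rejected as a replay
```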
we've claimed that the largest use of SSL has been the e-commerce stuff that we previously worked on ... as part of hiding transactions during transmission. x9.59 eliminates the requirement for hiding transactions (and therefore eliminates one of the major uses for SSL).
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: ANN: Microsoft goes Open Source Newsgroups: alt.folklore.computers Date: Fri, 20 Apr 2007 10:10:51 -0600greymaus writes:
for one of the trips ... we had to do walk thru of new fab in dresden
(including all the bunny suit stuff) ... somewhat in support of
https://www.garlic.com/~lynn/aadssummary.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Fast and Safe C Strings: User friendly C macros to Declare and use C Strings. Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Fri, 20 Apr 2007 10:50:50 -0600JimKeo <jimkeo@multi-platforms.com> writes:
the initial implementation would get about 44kbyte/sec consuming most of a 3090 processor. i then added the support for rfc1044 and in some tuning tests at cray research was getting channel speed (1mbyte/sec) between a 4341 clone and a cray machine ... using only a modest amount of the 4341 processor ... i.e. about 25 times the aggregate thruput for about 1/20 the pathlength ... about 400-500 times difference in bytes transferred per instruction executed.
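following that same derivation, the rough arithmetic behind the 400-500x figure:

```python
# quoted figures: baseline vs rfc1044-tuned thruput, and relative pathlength
base_rate = 44      # kbytes/sec, consuming most of a 3090 processor
tuned_rate = 1000   # kbytes/sec (channel speed) on a 4341 clone

thruput_ratio = tuned_rate / base_rate   # ~23x ("about 25 times")
pathlen_ratio = 20                       # ~1/20 the pathlength

# bytes transferred per instruction executed improves by both factors
bytes_per_insn_ratio = thruput_ratio * pathlen_ratio  # ~455x ("400-500 times")
```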
re:
https://www.garlic.com/~lynn/2007h.html#8 whiny question: Why won't z/OS support the HMC 3270 emulator
misc. posts mentioning various compromises, vulnerabilities, exploits,
etc related C language
https://www.garlic.com/~lynn/subintegrity.html#overflow
and misc. past posts mentioning having done rfc1044 support
https://www.garlic.com/~lynn/subnetwork.html#1044
for other topic drift ... we had an internal high-speed backbone
... part of our hsdt (high-speed data transport project)
https://www.garlic.com/~lynn/subnetwork.html#hsdt
and we working with various organizations and NSF for applying it
to NSFNET related operations ... various old email from the period
https://www.garlic.com/~lynn/lhwemail.html#nsfnet
in various posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Fast and Safe C Strings: User friendly C macros to Declare and use C Strings. Newsgroups: bit.listserv.ibm-main,alt.folklore.computers To: <ibm-main@bama.ua.edu> Date: Fri, 20 Apr 2007 12:15:35 -0600re:
besides vs/pascal and a lot of chip design applications, the los gatos vlsi
lab had also done the LSM ... original name was los gatos state machine,
but changed to logic simulation machine for some external publications
... it ran chip logic simulation at something like 50,000 times the speed of a
software application running on 3033. it was somewhat original in that
it could take into account time (allowed for handling asynchronous clock
chips as well as digital chips with analog circuits). The later
machines, like EVE (endicott verification engine), assumed chips with
synchronous clock. recent post mentioning LSM (with several LSM, YSE,
and EVE references):
https://www.garlic.com/~lynn/2007f.html#73 Is computer history taught now?
one of the HSDT high-speed links
https://www.garlic.com/~lynn/subnetwork.html#hsdt
was between austin and los gatos ... and there was fair amount of chip design traffic over the link from austin to los gatos; in fact it was claimed that the availability helped bring in the RIOS (i.e. rs/6000) chipset a year early.
The Los Gatos lab also did a high-performance experimental database
in conjunction with some people from STL ... somewhat concurrent with
system/r ... original sql/relational implementation
https://www.garlic.com/~lynn/submain.html#systemr
it shared some of the characteristics of relational ... but while the system/r implementation assumed fairly regular information organization implemented in tables ... the los gatos implementation (also originally done in vs/pascal) was targeted at chip design ... both logical and physical layout ... with possibly extremely anomalous and non-uniform data (not well suited for table structure).
i had worked on some of the system/r stuff ... recent post
https://www.garlic.com/~lynn/2007.html#1 The Elements of Programming Style
with some old email
https://www.garlic.com/~lynn/2007.html#email801006
https://www.garlic.com/~lynn/2007.html#email801016
as well worked on some of the implementation that los gatos was doing.
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: sizeof() was: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Fri, 20 Apr 2007 15:52:51 -0600"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
HONE started out with clone of the (cambridge) science center's cp67
system and cms\apl (science center had ported apl\360 to cms).
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE migrated to vm370 and apl\cms ... apl\cms work done at the palo alto science center, which included the apl microcode assist for 370/145 (was able to get about the same thruput out of 145 as apl on 168 w/o microcode assist ... i.e. about 10:1 improvement in pure processing).
However, the issue for HONE was that a lot of their applications were also i/o and memory hungry ... which meant they actually had to have 370/168 (or better) machines (w/o apl m'code assist).
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: T.J. Maxx data theft worse than first reported Newsgroups: bit.listserv.ibm-main Date: Fri, 20 Apr 2007 16:28:14 -0600Tom.Schmidt@ibm-main.lst (Tom Schmidt) writes:
and for more topic drift, latest news , hot of the press today
Laptops And Flat Panels Now Vulnerable to Van Eck Methods
http://hardware.slashdot.org/hardware/07/04/20/2048258.shtml
Seeing through walls
http://www.newscientist.com/blog/technology/2007/04/seeing-through-walls.html
from above:
Back in 1985, Wim Van Eck proved it was possible to tune into the radio
emissions produced by electromagentic coils in a CRT display and then
reconstruct the image. The practice became known as Van Eck Phreaking,
and NATO spent a fortune making its systems invulnerable to it. It was a
major part of Neal Stephenson's novel Cryptonomicon.
... snip ...
so as previously noted, there are several countermeasures to eavesdropping
and replay attacks ... 1) make sure the attacker can't get the
information, 2) scramble/encrypt, so the information is unintelligible,
3) change the paradigm (ala x9.59) so the eavesdropped/harvested
information is useless for replay attacks.
https://www.garlic.com/~lynn/x959.html#x959
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: sizeof() was: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Sat, 21 Apr 2007 03:07:12 -0600Anne & Lynn Wheeler <lynn@garlic.com> writes:
slight drift:
Ballmer: Citigroup to upgrade 500,000 PCs to Vista in next year
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9017204
now 25yrs ago, for a lot of these commercial entities, it would be the case of upgrading from desktop terminals to desktop pcs/computing.
although back then, this particular institution also had an issue
with variable rate mortgages ... long winded post mentioning a variety
of different subjects ...including variable rate mortgages
https://www.garlic.com/~lynn/aepay3.htm#riskm The Thread Between Risk Management and Information Security
collected posts mentioning some aspect of terminal emulation
https://www.garlic.com/~lynn/subnetwork.html#emulation
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: ANN: Microsoft goes Open Source Newsgroups: alt.folklore.computers Date: Sat, 21 Apr 2007 03:19:23 -0600"Robert" <sabu77@comcast.net> writes:
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: sizeof() was: The Perfect Computer - 36 bits? Newsgroups: alt.folklore.computers Date: Sat, 21 Apr 2007 04:42:53 -0600Anne & Lynn Wheeler <lynn@garlic.com> writes:
oh, almost forgot ... while t/r might not have been an optimal LAN ...
even 16mbit compared to 10mbit enet ... recent reference:
https://www.garlic.com/~lynn/2007g.html#80 IBM to the PCM market
20yrs ago it could be used to address a slightly different aspect.
20yrs ago, there were starting to be bldg. flr loading problems with the 3270 coax cable trays (long runs of 3270 coax from every office back to the bldg datacenter and stressing the bldg. flr loading limits). the other problem was the fire inspectors were finding that a lot of the 3270 coax insulation was flammable and the coax cable trays (with large bundles of flammable material) permeated bldgs. they were mandating that all that coax cable had to be replaced with cable that had non-flammable insulation.
so along comes t/r ... the customer could replace their desktop PC 3270 emulation cards with t/r cards ... and still run their dumb terminal emulation. the CAT4 cable was significantly lighter than 3270 coax ... and CAT4 cable runs tended to be significantly shorter distance ... to a local t/r MAU ... rather than all the way back to the bldg. datacenter. All those heavy 3270 coax cable trays permeating the bldg with massive amounts of flammable material could just disappear.
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: SSL vs. SSL over tcp/ip Newsgroups: comp.security.misc Date: Sat, 21 Apr 2007 10:20:56 -0600"xpyttl" <xpyttl_NOSPAM@earthling.net> writes:
we did have approval/sign-off on the webserver to payment gateway
part of the implementation ... misc. past posts about payment gateway
https://www.garlic.com/~lynn/subnetwork.html#gateway
and had to specify some additional operations for SSL ... like mutual authentication ... which didn't exist at the time. both http and https were implemented running over TCP ... supposedly a "reliable" protocol.
in part because we had done ha/cmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp
we had setup the payment gateway as a continuous operation with multiple different physical connections into multiple different carefully selected ISPs ... that were at different places in the internet backbone (and also have since claimed it was the original SOA).
in the midst of doing all this ... internet governance transitioned from anarchy routing (i.e. ROUTED and such advertisements) to hierarchical routing (part of the problem was that anarchy routing was exceeding available memory space in backbone routers ... and transitioning to hierarchical routing saved significant memory and processing). As a result, it was no longer possible to dynamically advertise routes as a countermeasure to various failures and/or partitioning in the internet (or even an ISP taking hardware down on sundays for maintenance). So the only alternative left was (domain name) multiple A-record support and the "client" (setting up the connection) running thru all the advertised A-records until it found one that got thru.
At the time, we got several howls from individuals claiming that using straight-forward TCP connects provided for "reliable transport" ... but that was only true once the connection was made. If there wasn't a specific path that was up-and-running ... it was still possible for the initial TCP connection to fail (we came up with the observation that if it wasn't in Stevens' tcp/ip book used in undergraduate courses, they didn't know it existed). Once the TCP connection was made to the appropriate port ... then http/https could start their part of the process.
The explanation of multiple A-records was then met with the response that it was "way too advanced" (even when presented with telnet example code from 4.3 Tahoe) ... however, since we had sign-off/approval for the server to payment gateway implementation ... it had to be implemented there. It took another year to get the client/browser side to implement it for the client to server operation.
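the multiple A-record fallback amounts to a small client-side loop ... try each advertised address for the name until a TCP connect actually succeeds. a minimal modern sketch (python and the function name here are mine for illustration; the original was C code along the lines of the 4.3 Tahoe telnet example):

```python
import socket

def connect_any(host, port, timeout=5.0):
    """Try each advertised A-record in turn until a TCP connect succeeds.

    DNS may return several addresses for one name (multiple A-records);
    the client iterates until it finds a path that is actually up,
    instead of failing because the first address is unreachable.
    """
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)
        try:
            s.connect(addr)        # fails if this particular path is down
            return s               # first reachable address wins
        except OSError as e:       # connection refused/timed out: try next
            last_err = e
            s.close()
    raise OSError(f"all addresses for {host} failed") from last_err
```

the point in the text above is exactly this: TCP is only "reliable" once the connection exists ... the initial connect can still fail, and something (here, the loop) has to try the alternate advertised addresses.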
An example was an early, large web merchant that would advertise products in commercials that ran during sunday afternoon football. However, one of their ISPs had a standard practice of taking routers down on sundays for maintenance. As a result ... all browsers that were routing in through that ISP path would be unable to connect during half-time (which was prime activity for the merchant) ... even though there were alternate paths available for connection.
This was in the period that ipsec was attempting to take over the world with end-to-end (network) encryption. I've claimed that both SSL and VPN got a lot of uptake in that period because of the difficulty that ipsec was having in getting new kernel space tcp/ip stack code deployed. About the same time that SSL was starting to be used ... a friend of ours that we had worked with since late 70s ... introduced (router/gateway based) VPN in the gateway committee meeting at the IETF san jose meeting. My observation was that it ruffled some feathers in the ipsec operation ... until they came up with the label "lightweight ipsec" for VPNs (which meant that everybody else could call what they were doing "heavyweight ipsec").
A corporation with hundreds/thousands of machines, each containing its own kernel and tcp/ip protocol stack, didn't have to update any of them. Individuals could just load a new browser application ... and voila ... all of a sudden they had "end-to-end" (application layer) encryption (it was similarly simple for end-users/consumers with their own home machine ... where the kernel had been preloaded by the PC vendor).
from our rfc index
https://www.garlic.com/~lynn/rfcietff.htm
click on Term (term->RFC#) in RFCs listed by section. Then click on
"TLS" in the Acronym fastpath section ... i.e.
transport layer security (TLS )
see also encryption , security
4785 4762 4681 4680 4642 4572 4513 4507 4492 4366 4347 4346 4279 4261
4217 4162 4132 3943 3788 3749 3734 3546 3436 3268 3207 2847 2830 2818
2817 2716 2712 2595 2487 2246
clicking on the RFC number brings up the RFC summary in the lower
frame; clicking on the ".txt=nnn" field in the RFC summary retrieves
the actual RFC.
similarly:
transmission control protocol (TCP )
see also connection network protocol
4828 4808 4782 4727 4654 4653 4614 4413 4404 4342 4341 4278 4163 4145
4138 4022 4015 3822 3821 3782 3742 3734 3708 3649 3562 3522 3517 3481
3465 3449 3448 3430 3390 3360 3293 3081 3042 2988 2923 2883 2873 2861
2760 2582 2581 2525 2488 2452 2416 2415 2414 2398 2385 2151 2147 2140
2126 2018 2012 2001 1859 1792 1791 1739 1693 1644 1613 1470 1379 1347
1337 1323 1273 1263 1213 1195 1185 1180 1158 1156 1155 1147 1146 1145
1144 1110 1106 1095 1086 1085 1078 1072 1066 1065 1025 1006 1002 1001
983 964 962 896 889 879 872 848 846 845 843 842 839 838 837 836 835 834
833 832 817 816 814 813 801 794 793 773 761 721 700 675
for other topic drift ... while tcp/ip was the technology basis
for the modern internet ... we claim that the NSFNET backbone was
the operational basis for the modern internet ... misc. past posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: John W. Backus, 82, Fortran developer, dies Newsgroups: alt.folklore.computers Date: Sat, 21 Apr 2007 16:33:54 -0600krw <krw@att.bizzzz> writes:
as tolerances became more & more refined ... the mating of the drive and arm mechanism to the 3340-like enclosed infrastructure became less practical ... instead it all became one unit, without the problems of providing a decoupling infrastructure.
picture of a string of 3340 drives with the 3340 removable pack (which
is a completely enclosed infrastructure):
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3340.html
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3340b.html
it sort of reminds me of an analogy with Boyd's comment about the F111
moveable wing ... the mechanical infrastructure & weight to support the
moveable wing cost more in performance and operation ... than the
advantages gained from having a moveable wing (it is part of the reason
that you don't see a moveable wing in his work on the f15, f16, f18,
etc). f111 reference:
http://www.airpower.maxwell.af.mil/airchronicles/aureview/1972/nov-dec/holder.html
misc. posts mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd
misc. URLs from around the web mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: John W. Backus, 82, Fortran developer, dies Newsgroups: alt.folklore.computers Date: Sat, 21 Apr 2007 17:43:48 -0600krw <krw@att.bizzzz> writes:
the reference (in the above)
http://www.airpower.maxwell.af.mil/airchronicles/aureview/1972/nov-dec/holder.html
refers to designs that could be improved by eliminating the swing-wing (i.e. it doesn't mean that a swing-wing is bad ... just that a swing-wing can "cost" more than it benefits).
i.e. the F14 was designed before Boyd's E-M theory of maneuverability (B-52s were also ... and some of them are still flying).
boyd ran the light weight fighter (LWF) office at the pentagon ... doing
work on the f15, f16, f18 ... here is an article on the F18 ... which
was to be the F14 follow-on.
https://en.wikipedia.org/wiki/F/A-18_Hornet
Boyd had lots of fights taking a huge amount of weight out of the F15 and improving its performance ... and also doing the F16 design. The above article makes some reference to it.
wiki page for LWF (also mentions Boyd and Boyd's E-M theory
of maneuverability)
https://en.wikipedia.org/wiki/Light_Weight_Fighter
the above page talks about how much of the LWF was based on Boyd's work ... but it doesn't actually mention that Boyd ran the LWF office in the pentagon for a period.
search engine turns up this comparison of F14 and F18
http://www.geocities.com/CapeCanaveral/8629/showdown.htm
the contrast between the F14 and F18 in the above ... is similar to some of the arguments about the F15 vis-a-vis the F16 ... the F15 can carry a heavier (missile) payload and therefore has more capability to fight at a distance (acting more like a missile platform than a fighter).
and of course ... collected post mentioning Boyd:
https://www.garlic.com/~lynn/subboyd.html#boyd
misc. URLs from around the web mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: John W. Backus, 82, Fortran developer, dies Newsgroups: alt.folklore.computers Date: Sat, 21 Apr 2007 19:48:28 -0600Bernd Felsche <bernie@innovative.iinet.net.au> writes:
while the referenced swing-wing article is from 72
http://www.airpower.maxwell.af.mil/airchronicles/aureview/1972/nov-dec/holder.html
... it includes references to
the Messerschmitt P-1101
the Bell X-5
the Grumman XF10F
the Convair F-111
the B-1 strategic bomber (North American Rockwell)
the Grumman F-14 Tomcat
the Mirage G8 and the Panavia 200
Boeing's initial SST
swingwing in space -- the Lockheed FDL-5
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: John W. Backus, 82, Fortran developer, dies Newsgroups: alt.folklore.computers Date: Sat, 21 Apr 2007 20:05:37 -0600Anne & Lynn Wheeler <lynn@garlic.com> writes:
a couple old posts retelling one of Boyd's folklore tales from when he was
running LWF office in the pentagon
https://www.garlic.com/~lynn/2004b.html#13 The BASIC Variations
https://www.garlic.com/~lynn/2005n.html#14 Why? (Was: US Military Dead during Iraq War
lots of past posts mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd
and various URLs from around the web mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: John W. Backus, 82, Fortran developer, dies Newsgroups: alt.folklore.computers Date: Sat, 21 Apr 2007 21:05:05 -0600Anne & Lynn Wheeler <lynn@garlic.com> writes:
and recent F18 item from today
Pilot Killed in S.C. Blue Angel Crash
http://apnews.myway.com/article/20070421/D8OLA5GO0.html
and Blue Angels FAQ ... with other comments about F18
vis-a-vis F14
http://www.blueangels.com/faq.shtml
and
http://www.navy.com/about/navylife/onduty/blueangels/faq/
and F14 details:
http://www.fas.org/man/dod-101/sys/ac/f-14.htm
and F18
http://www.fas.org/man/dod-101/sys/ac/f-18.htm
and with reference to comment about B-52s still flying (as opposed to F-14s):
Navy retires F-14, the coolest of cold warriors
http://www.usatoday.com/news/nation/2006-09-22-F14-tomcat_x.htm
and B52
http://www.centennialofflight.gov/essay/Air_Power/B52/AP37.htm
from above:
An engineering study in the year 2001 predicted that the B-52 would be
flying for the air force into the year 2045, almost a century after its
development began. It has outlived not only its predecessors but also
many of its successors such as the Convair B-58, Rockwell B-70 and B-1A,
and perhaps even the B-1B. A USAF general called it a plane that is "not
getting older, just getting better." Of the 744 B-52s built, fewer than
100 remain in service, all H-models. The Boeing engineers had built a
plane that was strong enough to last and basic enough to be adaptable to
the changing technology of air war.
... snip ...
B-52 STRATOFORTRESS
http://www.af.mil/factsheets/factsheet.asp?fsID=83
Atlantic Strike V begins in Avon Park (4/18/2007)
http://www.af.mil/news/story.asp?storyID=123049204
from above:
Joint air assets participating in the training include F-16 Fighting
Falcons, A-10 Thunderbolt IIs, B-52 Stratofortress, E-8 Joint STARS,
Navy F/A-18 Hornets, E-2 Hawkeyes, and P-3C Orions. Coalition observers
include military members from the United Kingdom, Germany and the
Netherlands.
... snip ...
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: John W. Backus, 82, Fortran developer, dies Newsgroups: alt.folklore.computers Date: Sat, 21 Apr 2007 21:19:17 -0600krw <krw@att.bizzzz> writes:
while Boyd vastly improved the F-15 ... the F15 didn't originate from his design principles.
so a little more detail from here
USAF Col John Boyd
http://www.sci.fi/~fta/JohnBoyd.htm
from above ...
In the mean time the U.S. media had focused in the huge price tag that
went with the F-15 and the poor performance of the F-14 Tomcat. The
Nixon government urged Secretary of Defense Melvin Laird to put the
military procurement system on track. Laird gave the mission to his
assistant David Packard, who approved the lightweight fighter project.
... snip ... and ...
Lightweight fighter studies showed that the aircraft would have better
performance than the F-15 Eagle, but this information had to be kept
secret because the USAF didn't want even the prototype to be better
than the F-15.
... snip ...
Boyd's stories of what went on are more colorful.
misc. posts mentioning Boyd:
https://www.garlic.com/~lynn/subboyd.html#boyd
and various Boyd URLs from around the web
https://www.garlic.com/~lynn/subboyd.html#boyd2
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: John W. Backus, 82, Fortran developer, dies Newsgroups: alt.folklore.computers Date: Sat, 21 Apr 2007 21:48:38 -0600Anne & Lynn Wheeler <lynn@garlic.com> writes:
so even more from:
http://www.sci.fi/~fta/JohnBoyd.htm
from above:
In late 1962 Boyd met the General Dynamics project engineer for the
F-111, Harry Hillaker, at the Eglin O'Club. Boyd complained to Hillaker
that the F-111 was underpowered and the swing-wing mechanism was too
complicated to be used fast enough to sweep the wings during flight and
would get fatigue and stress cracks. Boyd had already done some E-M
calculations on the F-111 and knew that the Air Force was about to make
a mistake if it procured the F-111. Swing-wing technology would
ultimately ruin two generations of airplanes: the Navy's underpowered
F-14 and the Air Force's B-1 bomber. Boyd and Hillaker agreed that they
would like to develop a small maneuverable fighter.
... snip ...
a Boyd reference from earlier this year
https://www.garlic.com/~lynn/2007.html#20 MS to world: Stop sending money, we have enough - was Re: Most ... can't run Vista
with a reference to a Boyd quote here
http://www.belisarius.com/
https://web.archive.org/web/20010722050327/http://www.belisarius.com/
from above:
"There are two career paths in front of you, and you have to choose
which path you will follow. One path leads to promotions, titles, and
positions of distinction.... The other path leads to doing things that
are truly significant for the Air Force, but the rewards will quite
often be a kick in the stomach because you may have to cross swords
with the party line on occasion. You can't go down both paths, you
have to choose. Do you want to be a man of distinction or do you want
to do things that really influence the shape of the Air Force? To be
or to do, that is the question." Colonel John R. Boyd, USAF 1927-1997
From the dedication of Boyd Hall, United States Air Force Weapons
School, Nellis Air Force Base, Nevada. 17 September 1999
... snip ...
past posts mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd
various URLs from around the web mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: John W. Backus, 82, Fortran developer, dies Newsgroups: alt.folklore.computers Date: Sun, 22 Apr 2007 08:05:02 -0600Anne & Lynn Wheeler <lynn@garlic.com> writes:
so to slightly return to a computer bent ... they eventually realized that Boyd was working on the design of what was to become the F16 ... and tried all sorts of means to stop him. One was that they realized that he had to be using significant amounts of (gov.) supercomputer time for the design (programs written in Fortran) ... and they were going to find records which would allow him to be charged with theft of millions in gov. resources ... put him in Leavenworth and throw away the key. There were extensive audits of all gov. supercomputers ... but they never found any records of Boyd's use.
past posting of story about trying to get Boyd thrown into
Leavenworth
https://www.garlic.com/~lynn/99.html#120 atomic History
https://www.garlic.com/~lynn/2005t.html#13 Dangerous Hardware
past posts mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd
various URLs from around the web mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: John W. Backus, 82, Fortran developer, dies Newsgroups: alt.folklore.computers Date: Sun, 22 Apr 2007 08:31:25 -0600jmfbahciv writes:
the other is that there was a study in the early 80s, attributed to Jim when he was at Tandem ... showing that hardware was becoming a dwindling percentage of the causes of service outages.
IMS hot-standby was configured with replicated locations at geographically separated datacenters.
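on the client side, the hot-standby idea reduces to trying replicas in a fixed order and only failing when every site is down ... a hedged sketch (function and site names are made up for illustration, not IMS interfaces):

```python
def query_with_failover(replicas, run_query):
    """Try the primary first, then fall back to geographically
    separated standbys in order; raise only if every site fails.

    'replicas' is an ordered list of site identifiers and 'run_query'
    is whatever callable actually talks to a site -- both are
    hypothetical names for this sketch.
    """
    errors = []
    for site in replicas:
        try:
            return run_query(site)        # first reachable site answers
        except Exception as e:            # site down/unreachable: try next
            errors.append((site, e))
    raise RuntimeError(f"all replicas failed: {errors}")
```

the value of the geographic separation is that a single-site disaster (the "dwindling" hardware failures aside) doesn't take out every entry in the list at once.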
of course, past posts mentioning my wife being con'ed into going to POK
to be in charge of loosely-coupled architecture (i.e. the mainframe term
for cluster) ... and being responsible for Peer-Coupled Shared Data
architecture ... and that there was very little uptake at the time
... except for the work on IMS hot-standby (until the sysplex work some
decades later)
https://www.garlic.com/~lynn/submain.html#shareddata
of course later we did ha/cmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp
where we coined the terms disaster survivability and geographic
survivability (to differentiate from disaster recovery)
https://www.garlic.com/~lynn/submain.html#available
for other drift ... other recent posts mentioning Jim:
https://www.garlic.com/~lynn/2007.html#1 "The Elements of Programming Style"
https://www.garlic.com/~lynn/2007.html#13 "The Elements of Programming Style"
https://www.garlic.com/~lynn/2007d.html#4 Jim Gray Is Missing
https://www.garlic.com/~lynn/2007d.html#6 Jim Gray Is Missing
https://www.garlic.com/~lynn/2007d.html#8 Jim Gray Is Missing
https://www.garlic.com/~lynn/2007d.html#17 Jim Gray Is Missing
https://www.garlic.com/~lynn/2007d.html#33 Jim Gray Is Missing
https://www.garlic.com/~lynn/2007e.html#4 The Genealogy of the IBM PC
https://www.garlic.com/~lynn/2007f.html#12 FBA rant
https://www.garlic.com/~lynn/2007g.html#28 Jim Gray Is Missing
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Linux: The Completely Fair Scheduler Newsgroups: alt.folklore.computers Date: Sun, 22 Apr 2007 11:55:24 -0600Linux: The Completely Fair Scheduler
from above
Regarding the actual implementation, Ingo explained, "CFS's design" is
quite radical: it does not use runqueues, it uses a time-ordered rbtree
to build a 'timeline' of future task execution, and thus has no 'array
switch' artifacts (by which both the vanilla scheduler and RSDL/SD are
affected).
... snip ...
deja vu from '68 ... nearly 40 yrs ago, as an undergraduate, doing the
resource manager for cp67 where the default policy was fair share
... misc. posts about the resource manager and fair share scheduling policy
https://www.garlic.com/~lynn/subtopic.html#fairshare
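the family resemblance can be sketched: keep runnable tasks ordered by how much (virtual) time each has consumed, and always run the one that has had the least ... a toy illustration using a heap in place of CFS's time-ordered rbtree (the class and names are mine, not the linux implementation or the cp67 resource manager):

```python
import heapq

class FairScheduler:
    """Minimal fair-share sketch: a single time-ordered structure of
    (vruntime, task) pairs replaces per-priority runqueues; the
    "leftmost" entry -- the task that has consumed the least virtual
    runtime -- is always picked next."""

    def __init__(self):
        self._timeline = []                 # heap of (vruntime, name)

    def add(self, name, vruntime=0.0):
        heapq.heappush(self._timeline, (vruntime, name))

    def run_next(self, slice_ns):
        # pick the most "owed" task, charge it for the slice it ran,
        # and reinsert it at its new position in the timeline
        vruntime, name = heapq.heappop(self._timeline)
        heapq.heappush(self._timeline, (vruntime + slice_ns, name))
        return name
```

with equal-weight tasks this degenerates to round-robin; the interesting property is that a task that sleeps (accumulates no vruntime) automatically sorts to the front when it wakes, with no array-switch artifacts.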
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: John W. Backus, 82, Fortran developer, dies Newsgroups: alt.folklore.computers Date: Mon, 23 Apr 2007 06:08:46 -0600jmfbahciv writes:
re:
https://www.garlic.com/~lynn/2007h.html#76 John W. Backus, 82, Fortran developer, dies
note ... part of this is also cost/benefit analysis ... does the net
added benefit of high availability justify the incremental cost
... especially as some of the techniques for providing higher
availability have come down in cost ... i.e. earlier post
https://www.garlic.com/~lynn/2007h.html#53 John W. Backus, 82, Fortran developer, dies
for slightly different discussion related to this topic ... posting in
different n.g.
https://www.garlic.com/~lynn/2007h.html#67 SSL vs. SSL over tcp/ip
about several compensating processes/techniques for improving the
availability of the original payment gateway
https://www.garlic.com/~lynn/subnetwork.html#gateway
other related past posts about payment gateway as original SOA (service
oriented architecture) and possibly taking 4-10 times the effort to
turn a well-tested application into a "service"
https://www.garlic.com/~lynn/2001f.html#75 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001n.html#91 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#93 Buffer overflow
https://www.garlic.com/~lynn/2002n.html#11 Wanted: the SOUNDS of classic computing
https://www.garlic.com/~lynn/2003g.html#62 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003j.html#15 A Dark Day
https://www.garlic.com/~lynn/2003p.html#37 The BASIC Variations
https://www.garlic.com/~lynn/2004b.html#8 Mars Rover Not Responding
https://www.garlic.com/~lynn/2004b.html#48 Automating secure transactions
https://www.garlic.com/~lynn/2004k.html#20 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004l.html#49 "Perfect" or "Provable" security both crypto and non-crypto?
https://www.garlic.com/~lynn/2004p.html#23 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2004p.html#63 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2004p.html#64 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2005b.html#40 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005i.html#42 Development as Configuration
https://www.garlic.com/~lynn/2005n.html#26 Data communications over telegraph circuits
https://www.garlic.com/~lynn/2006n.html#20 The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2007f.html#37 Is computer history taught now?
https://www.garlic.com/~lynn/2007g.html#51 IBM to the PCM market(the sky is falling!!!the sky is falling!!)