From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Z/90, S/390, 370/ESA (slightly off topic)
Newsgroups: comp.lang.asm370
Date: Tue, 20 Feb 2001 23:26:48 GMT

richgr@panix.com (Rich Greenberg) writes:
random refs:
https://www.garlic.com/~lynn/94.html#11
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Z/90, S/390, 370/ESA (slightly off topic)
Newsgroups: comp.lang.asm370
Date: Wed, 21 Feb 2001 13:15:14 GMT

Randy Hudson writes:
also in the 158 time-frame ... there were people taking some extremely performance sensitive fortran programs running on 195s and hand-coding selected performance sensitive sequences directly in 370 assembler, getting 10:1 speedups (or better, i.e. comparing hand-coded machine language vis-a-vis fortran compiler generated machine language). I believe that a lot of the work on assemblers better than assembler F ... was driven by people involved in high performance computing optimization (a lot of the stuff that went into assembler H and some of the other varieties).
there was some work on the 158/168 ... showing that if the machine didn't have to check for i-stream code modifications you might get a 2:1 speedup ... the i-stream fetched a double word of instructions and started some overlap of decode and execution. however, 360/370 allowed that the previous instruction could do a store and totally overlay the following instruction ... which introduced some stall in the amount of overlap that could be done (which goes away with direct microcode implementation).
in the vertical microcoded machines things were frequently expressed in the number of native engine instructions needed to implement the 360/370 instruction set. because of the totally different nature of horizontal microcode, things were typically expressed in terms of machine cycles. I remember that one of the differences going from 165 to 168 was the reduction of avg. machine cycles per instruction from 2.1 to 1.6 (while the cycle time didn't change). The 3033 further reduced that to about 1.0 (and simple straight-forward translation of 370 instruction sequences to microcode showed little or no performance improvement).
sometimes it was possible to get 30:1 speedup with hand-generated machine code compared to fortran generated machine code.
in any case, on the 158, going from fortran to microcode involved a much larger number of factors than simulating one machine language instruction set in the same or similar machine language instruction set.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Z/90, S/390, 370/ESA (slightly off topic)
Newsgroups: comp.lang.asm370
Date: Wed, 21 Feb 2001 14:15:42 GMT

Randy Hudson writes:
Going from running a non-microcoded version of the APL interpreter against a particular APL program to directly implementing the function of that APL program in microcode might show a 100:1 improvement (or better).
In the early 70s, the internal HONE system (all field, branch and many hdqtrs functions) was totally a CMS/APL and then APL/CMS environment with some amount of shared segments for both CMS and APL. The CP/67 version of HONE and the early VM/370 versions didn't provide any graceful way of transitioning between the APL environment and a non-APL environment (because of the way that shared segments were implemented).
Some amount of the HONE-delivered applications were APL simulation models. To a large extent these served the business functions that today are implemented in spreadsheets, allowing business people to pose what-if questions. Other APL applications were configurators ... which walked the salesman thru a sequence of Q&A for specifying a machine (starting with the 370/125 it was no longer possible for a salesman to order/specify a machine w/o the help of a HONE configurator; again a type of function that today would frequently be implemented in a spreadsheet).
HONE also delivered some number of very sophisticated performance modeling tools, including allowing an SE to take detailed customer performance data and feed it to the modeling tools, providing reasonable guesstimates for the customer as to the effect that a faster processor and/or more memory would have on the customer's workload thruput.
For some of these more sophisticated application models, it was possible to recode the APL implementation in Fortran, compile the Fortran and show that the compiled Fortran executed possibly 50 times faster than the interpreted APL version.
In any case, I had done an enhancement to CP/67 for generalized shared memory support late in the CP/67 cycle (release 1 of VM/370 was already available to customers). For HONE, I converted that to VM/370 r2plc15 base. The use of this at HONE was so that they could transparently slip back&forth between the generalized end-user environment (implemented in APL/CMS) and the growing number of performance sensitive applications that had been recoded in Fortran.
A small subset of this was incorporated into the product in release 3 of VM/370, called discontiguous shared segments. However, it didn't include the memory mapped file system and/or the ability to memory map arbitrary CMS executables/modules (including the ability for CMS modules to be generated with portions of the memory-mapped region specified as shared memory ... and for the CMS kernel to automatically invoke the shared memory specification as part of invoking a CMS executable).
Totally unrelated: I recently ran into somebody who had obtained the rights to a descendant of the HONE performance modeling tool, had applied an APL->C translator to it, and was doing a lot of performance analysis work (for instance, it had some interesting things to say about the performance costs of turning on parallel sysplex).
random refs:
https://www.garlic.com/~lynn/95.html#3
https://www.garlic.com/~lynn/97.html#4
https://www.garlic.com/~lynn/98.html#23
https://www.garlic.com/~lynn/99.html#38
https://www.garlic.com/~lynn/99.html#149
https://www.garlic.com/~lynn/99.html#150
https://www.garlic.com/~lynn/2000e.html#0
https://www.garlic.com/~lynn/2000e.html#22
https://www.garlic.com/~lynn/2000f.html#30
https://www.garlic.com/~lynn/2000f.html#62
https://www.garlic.com/~lynn/2000g.html#27
https://www.garlic.com/~lynn/2001.html#0
https://www.garlic.com/~lynn/2001.html#26
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Z/90, S/390, 370/ESA (slightly off topic)
Newsgroups: comp.lang.asm370
Date: Wed, 21 Feb 2001 14:52:36 GMT

Randy Hudson writes:
refs:
https://www.garlic.com/~lynn/2000d.html#0
a directly microcoded function on the 158 would have some more latitude to play games vis-a-vis cycle stealing of the channel function.
from much earlier in this same thread:
https://www.garlic.com/~lynn/2001b.html#69
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: what makes a cpu fast
Newsgroups: comp.arch
Date: Wed, 21 Feb 2001 15:19:20 GMT

cecchi@signa.rchland.ibm.com (Del Cecchi) writes:
the internet didn't really get gateway function until the great cut-over on 1/1/83 from NCP/IMPs to IP (at which time arpa/internet was approx. 250 nodes and the internal network was nearly 1000 nodes).
random refs:
https://www.garlic.com/~lynn/2001b.html#81
https://www.garlic.com/~lynn/2000e.html#18
https://www.garlic.com/~lynn/internet.htm
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: what makes a cpu fast
Newsgroups: comp.arch
Date: Wed, 21 Feb 2001 16:02:54 GMT

cecchi@signa.rchland.ibm.com (Del Cecchi) writes:
local user could interact in such a way that a consistency check was made with regard to a locally shadowed copy.
TOOLSRUN ran on top of VNET ... similar to the way that much of usenet is currently configured running on top of tcp/ip (as opposed to the uucp usenet flavors).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS/360 (was LINUS for S/390)
Newsgroups: bit.listserv.ibm-main
Date: Wed, 21 Feb 2001 16:18:26 GMT

listmanager@SNIPIX.FREESERVE.CO.UK (Roger Bowler) writes:
the 360/67 had EC mode, control registers, virtual memory, a channel director supporting more than 7 channels (i.e. in the 360/67 duplex, the channel director provided the capability for all processors to access all channels), and 24&32-bit addressing. In fact, the channel director capability disappeared in 370 and didn't really show up again until 308x.
370, prior to the virtual memory & ec-mode announce, had a couple bits defined in cr0 not associated with ec-mode, multiprocessing, or virtual memory. I don't remember how many of the other non-ec &/or possibly later announced CR functions were available at initial 370 first customer ship.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: LINUS for S/390
Newsgroups: bit.listserv.ibm-main
Date: Wed, 21 Feb 2001 20:34:34 GMT

astaller@TOTALSYSTEM.COM (Al Staller) writes:
also, there was a problem retrofitting the full DAT function to the 165 (as defined in the base 370 document for all machines). 165 engineers claimed that the engineering to retrofit some of the DAT features to the 165 would delay availability by six months and significantly increase the complexity of the hardware field upgrade ... as a result those features got dropped from the 370 announcement for all models (and in the case of the 145, which had already done the full architecture implementation, the eventual implementation shipped to customers had to be stripped back to only those things that the 165 would support).
Among the features that got dropped were all the selective invalidate instructions, IPTE, ISTE, and ISTO .... leaving just PTLB.
random refs:
https://www.garlic.com/~lynn/2000e.html#15
https://www.garlic.com/~lynn/2000f.html#55
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Server authentication
Newsgroups: comp.security.misc
Date: Wed, 21 Feb 2001 20:18:14 GMT

Patrick.Knight@currentechnologies.com (Patrick Knight) writes:
two parties with no prior relationship may frequently not have a context for the external identification and so the certificate means nothing.
in general, until i set up an account with some random ISP for my name .... the ISP is unlikely to let me connect and provide me with service just because I have sent them a certificate. they will typically want some context where the account incurs charges and a process for them to be paid for the incurred charges.
looking at the internet, possibly 99.9999999999% of the instances of client/server authentication events that go on in the internet world today occur using some sort of RADIUS; i.e. a consumer calls their ISP (server), connects, performs some hand-shaking authentication procedure and is granted access to the internet.
This occurs tens of millions and possibly hundreds of millions of times each day.
This process, involving 99.999999999% of the existing internet client/server authentication events, typically involves some form of userid/account and password.
The RADIUS infrastructure, involving 99.99999999% of the existing internet client/server authentication events, could relatively trivially support public/private key and still not require a certificate.
A typical certificate-based authentication process involves some sort of secure message, which has a digital signature appended, and then a certificate appended to the combined secure message and digital signature. The recipient then uses the public key bound in the certificate to verify the digital signature. This implies that the message was presumably transmitted by the owner of the certificate and that the message hasn't been modified in transit. The recipient then typically uses a combination of the contents of the secure message, the certificate and local administrative context to perform some action in conjunction with the authentication (typically authentication isn't being performed in the total absence of any reason .... i.e. the typical business process scenario is using authentication in conjunction with some sort of authorization, typically requiring a business context).
Now, a trivial way of adding an upgrade option for 99.99999999% of the existing internet client/server authentication events is to record the client's public key in place of a password or other authentication mechanism.
In this scenario, the client forms a secure message, digitally signs it, and transmits the secure message plus digital signature (w/o needing an associated certificate; note this can represent a real savings on traffic bloat where the size of a digital certificate is an order of magnitude or more bigger than the combined secure message plus digital signature). The server recipient obtains the transaction context pointer from the secure message, retrieves the context which also contains the public key, uses the public key to verify the digital signature ... and if it verifies, proceeds to perform any associated service/authorization function.
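a rough sketch of that flow in python (the account table, message format, and choice of ed25519 are purely illustrative assumptions; uses the third-party "cryptography" package):

# certificate-less, account-based digital signature authentication;
# account ids and message layout are made up for illustration.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# registration: the client's public key is recorded in the account
# record (in place of a password) ... no certificate involved.
client_key = ed25519.Ed25519PrivateKey.generate()
accounts = {"acct-1234": {"public_key": client_key.public_key()}}

# client: form the message (carrying the account/context pointer)
# and append a digital signature ... no certificate appended.
message = b"acct-1234:debit:25"
signature = client_key.sign(message)

# server: pull the context pointer out of the message, retrieve the
# account record, verify with the registered public key.
record = accounts[message.split(b":")[0].decode()]
try:
    record["public_key"].verify(signature, message)
    print("signature verifies ... proceed to authorization checks")
except InvalidSignature:
    print("reject")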
No certificate is needed. Certificates are really targeted at processes which can be carried out w/o regard to any additional context and/or needing a prior business relationship (i.e. say a server that exists solely for the purpose of performing authentication functions ... but otherwise does absolutely nothing).
A physical world analogy to the offline use of certificates is the old business paper checks that had signing limits (i.e. in theory the value bound into the paper check/certificate had some meaning when the check was being used at some offline site). As the world simultaneously moved to both electronic as well as online, there was never any real business requirement for an offline, electronic check (i.e. another word for certificate) ... instead the world went to things like online, electronic, business purchase cards that not only supported business rules regarding individual purchases, but also things like aggregate purchase rules (no longer having the scam where somebody signed 200 $5,000 checks in order to make a $1million purchase).
The issue with regard to being online ... is that if the current business context has to be accessed anyway (aka the account record), then it is more efficient to use an authentication process with the public key & other timely information bound in the account record than to rely on possibly stale data bound in a redundant and superfluous certificate.
some of the attached also discusses the scenario of a "secure" DNS providing a public key for server authentication, vis-a-vis relying on a server certificate. In part, most SSL server certificates supposedly fulfill a role of adding security because of possible security problems with the domain name infrastructure. However, a CA, issuing an SSL domain name certificate, has to rely on the authoritative agency for domain names to validate the information that will be bound into a certificate (i.e. the domain name infrastructure, which supposedly has the security problems justifying the certificate in the first place). A security improvement for the domain name system involves registering a public key at the same time the domain name is registered. However, 1) improving the domain name system so that the CA can better trust it as the authoritative agency ... also eliminates one of the main reasons for needing a server SSL certificate, and 2) if the domain name system is improved with registered public keys so that it can be better trusted to provide timely information, then it can also be trusted to provide the registered public key (making a server SSL certificate, with possibly stale information from the past, redundant and superfluous).
random refs:
https://www.garlic.com/~lynn/aadsm2.htm#inetpki
https://www.garlic.com/~lynn/aadsm2.htm#account
https://www.garlic.com/~lynn/aepay4.htm#comcert6
https://www.garlic.com/~lynn/aepay4.htm#comcert7
https://www.garlic.com/~lynn/aepay4.htm#comcert8
https://www.garlic.com/~lynn/aepay4.htm#rfc2807b
https://www.garlic.com/~lynn/aepay4.htm#rfc2807c
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Server authentication
Newsgroups: comp.security.misc
Date: Wed, 21 Feb 2001 23:58:01 GMT

aka
... for the 99.9999999% of the current "client" authentication events that goes on today in the internet, it is a relatively trivial upgrade to support public key authentication w/o requiring certificates (i.e. add digital signature authentication to radius and public key registration option in lieu of passwords).
... for the 99.9999999% of the current "server" authentication events (i.e. SSL domain name certificates) that go on today in the internet ... a justification has been that they are needed because of domain name system integrity problems; however, since the domain name infrastructure is the authoritative agency that a CA has to check with in order to validate the domain name for issuing a certificate .... it is in the interest of CAs to improve the integrity of the domain name infrastructure (so CAs can trust the domain name information that CAs need to validate when issuing a certificate). The current RFCs for improving the domain name infrastructure include registering a public key with the registration of the domain name. However, fixing the integrity of the domain name system eliminates the justification for the existing SSL domain name certificates ... as well as providing a trusted method for the domain name infrastructure to distribute trusted public keys w/o requiring certificates (not only are the SSL domain name certificates no longer justified, but they also become redundant and superfluous).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Memory management - Page replacement
Newsgroups: comp.os.misc,alt.folklore.computers
Date: 22 Feb 2001 11:24:34 -0700

Brian Inglis writes:
The whole purpose of the algorithm is hopefully to be able to discriminate between pages that have been recently referenced and those that haven't been recently referenced (the set of all pages with the reference bit off and the set of all pages with the reference bit on). The algorithm has some interesting dynamic adaptive characteristics because as the demand for pages goes up, the interval that pages can survive in memory decreases.
However, for various configurations ... the period of time it takes the hand to completely cycle through all pages may be excessively large ... and the algorithm ends up cycling with either all pages having the bit on or all pages having the bit off; i.e. the idea of the algorithm is to be able to discriminate between pages that have been used more recently versus pages that have been used less recently. If the time for the one hand to cycle thru all pages is long enuf, there may not be any discrimination capability (all pages look the same).
Two-handed clock allows more degrees of latitude in the period between the time a reference bit is turned off and the time it is examined ... while still preserving the dynamic adaptive characteristic of single-handed clock ... rather than the reset hand trailing by exactly the total number of available pages .... the reset hand can lead by some fraction of the total number of available pages. This gives some degree more control over the interval between the time the reference bit is turned off and the time it is examined ..... which in theory allows better discrimination in being able to "rank" pages that have been more recently referenced from those that have been less recently referenced (i.e. for some environments, the single-handed clock interval, with full rotation of all pages, is too long an interval to provide any meaningful discrimination/categorization of pages into the two groups ... they all appear to be in one group or the other).
However, both one-handed & two-handed clock will degenerate to FIFO under various circumstances. There is a variation that I also did on two-handed clock that has the interesting property of degenerating to random ... rather than FIFO; i.e. many of the conditions where the algorithm degenerates to FIFO are where pages are being used in a cyclic manner and you therefore guarantee that every page will be faulted on every cycle. Random works much better in this scenario, since it tends to reduce to only a random subset of pages being faulted on every cycle.
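a toy sketch of the two-handed clock in python (frame count, hand spacing, and the reference string are invented; a real implementation works on hardware reference bits in the page table):

# two-handed clock: the reset hand clears reference bits a fixed
# distance ahead of the examining hand; shrinking the spacing
# shortens the interval between reset and examination.
class Clock:
    def __init__(self, nframes, spacing):
        self.ref = [False] * nframes       # per-frame reference bits
        self.hand = 0                      # examining hand position
        self.spacing = spacing             # reset hand leads by this much
        self.n = nframes

    def touch(self, frame):                # page referenced
        self.ref[frame] = True

    def select_victim(self):
        while True:
            # reset hand: turn the bit off 'spacing' frames ahead
            self.ref[(self.hand + self.spacing) % self.n] = False
            victim, self.hand = self.hand, (self.hand + 1) % self.n
            if not self.ref[victim]:       # not re-referenced in the window
                return victim

clk = Clock(nframes=8, spacing=3)
for f in (0, 1, 2, 0, 1):
    clk.touch(f)
print(clk.select_victim())                 # frame 3: bit still off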
this was all 30+ years ago.
there was a stanford PhD thesis by Carr on clock in the early 80s.
random refs:
https://www.garlic.com/~lynn/93.html#0
https://www.garlic.com/~lynn/93.html#4
https://www.garlic.com/~lynn/94.html#01
https://www.garlic.com/~lynn/94.html#1
https://www.garlic.com/~lynn/94.html#2
https://www.garlic.com/~lynn/94.html#4
https://www.garlic.com/~lynn/94.html#6
https://www.garlic.com/~lynn/94.html#10
https://www.garlic.com/~lynn/94.html#14
https://www.garlic.com/~lynn/94.html#49
https://www.garlic.com/~lynn/94.html#54
https://www.garlic.com/~lynn/95.html#12
https://www.garlic.com/~lynn/96.html#0a
https://www.garlic.com/~lynn/96.html#10
https://www.garlic.com/~lynn/97.html#0
https://www.garlic.com/~lynn/97.html#28
https://www.garlic.com/~lynn/98.html#2
https://www.garlic.com/~lynn/98.html#17
https://www.garlic.com/~lynn/99.html#18
https://www.garlic.com/~lynn/2000.html#86
https://www.garlic.com/~lynn/2000f.html#9
https://www.garlic.com/~lynn/2000f.html#10
https://www.garlic.com/~lynn/2000f.html#32
https://www.garlic.com/~lynn/2000f.html#33
https://www.garlic.com/~lynn/2000f.html#34
https://www.garlic.com/~lynn/2000f.html#36
note ... if this was comp.arch ... the question would be assumed to be a homework assignment .... and answering homework assignments for other people is strictly frowned upon.
L. Belady, A Study of Replacement Algorithms for a Virtual Storage
Computer, IBM Systems Journal, v5n2, 1966
L. Belady, The IBM History of Memory Management Technology, IBM
Journal of R&D, v25n5
R. Carr, Virtual Memory Management, Stanford University,
STAN-CS-81-873 (1981)
R. Carr and J. Hennessy, WSClock, A Simple and Effective Algorithm
for Virtual Memory Management, ACM SIGOPS, v15n5, 1981
P. Denning, Working sets past and present, IEEE Trans Softw Eng, SE6,
jan80
J. Rodriguez-Rosell, The design, implementation, and evaluation of a
working set dispatcher, CACM, v16, apr73
D. Hatfield & J. Gerald, Program Restructuring for Virtual Memory,
IBM Systems Journal, v10n3, 1971
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Memory management - Page replacement
Newsgroups: comp.os.misc,alt.folklore.computers
Date: 22 Feb 2001 14:01:40 -0700

Anne & Lynn Wheeler writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What does tempest stand for.
Newsgroups: sci.crypt
Date: Thu, 22 Feb 2001 22:19:21 GMT

admin@hotmail.com (Mark Healey) writes:
also see security glossary at
https://www.garlic.com/~lynn/secure.htm
TEMPEST
(O) A nickname for specifications and standards for limiting the
strength of electromagnetic emanations from electrical and electronic
equipment and thus reducing vulnerability to eavesdropping. This term
originated in the U.S. Department of Defense. [Army, Kuhn, Russ] (D)
ISDs SHOULD NOT use this term as a synonym for 'electromagnetic
emanations security'. [RFC2828] The study and control of spurious
electronic signals emitted by electrical equipment, such as computer
equipment. [AJP][NCSC/TG004][TCSEC] (see also approval/accreditation,
preferred products list) (includes compromising emanations,
emanations, emanations security, emission security)
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: LINUS for S/390
Newsgroups: bit.listserv.ibm-main
Date: Fri, 23 Feb 2001 16:02:48 GMT

Ted.Spendlove@THIOKOL.COM (Ted Spendlove) writes:
as part of calibrating the resource manager (especially its dynamic adaptive characteristics), I built an automated benchmark procedure & did a test suite of 2000 benchmarks that took about 3 months elapsed time to run. Part of the test suite was varying available memory, processor speed, i/o capacity, and workload. Part of this was some of the genesis for automated operator; some of the tests involved rebuilding the kernel and then restarting the system and starting the next workload.
Some of the workload was nominally expected ... but there were also stress-test workloads and extreme outliers ... like a paging intensive workload; the most extreme version had a mean service time of 1 second for a page fault (saturate the page i/o subsystem at around 300-350 page i/os per sec. ... and build up a queue of 300 page requests ... new page requests would get put in the queue of 300+ and take a second to service).
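the 1-second figure is just queue length over thruput (little's law); a quick check with the numbers above:

# little's law check on the stress-test numbers above (325/sec is
# just the midpoint of the 300-350 range quoted)
queue_len = 300            # outstanding page requests
page_io_rate = 325.0       # page i/os per sec at saturation
print(f"mean page-fault service time ~ {queue_len / page_io_rate:.2f} sec")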
the benchmarks were also used to cross-calibrate the predictor ... the APL analytical performance model available to the field on HONE (given a customer workload characterization, the predictor would say stuff about what effects a faster cpu, more memory, or more disks would have).
in any case, being able to run the cambridge 155 at both 155 speeds and 145 speeds aided in the calibration process.
as a measure of the ability to dynamically adapt: a kernel implementing an early version (along with a bunch of other changes) had been made available to a special large customer for the 145. about 10 years later the customer had a wide deployment on 3083s(?) and called to ask about SMP support (they were still running the same kernel/code, and for ten years it had been doing its job of dynamically adapting to a wide range of processor speeds, available memory, i/o configurations and workload).
random refs:
https://www.garlic.com/~lynn/94.html#2
https://www.garlic.com/~lynn/95.html#0
https://www.garlic.com/~lynn/95.html#1
https://www.garlic.com/~lynn/99.html#38
https://www.garlic.com/~lynn/99.html#149
https://www.garlic.com/~lynn/99.html#150
note the following from
https://www.garlic.com/~lynn/2001c.html#2
A small subset of this was incorporated into the product in release 3
of VM/370 called discontiguous shared segments. However, it didn't
include the memory mapped file system and/or the ability to memory map
arbitrary CMS executables/modules (including the ability for CMS
modules to be generated with portions of the memory-mapped region
specified as shared memory ... and for CMS kernel to automatically
invoke the shared memory specification as part of invoking a CMS
executable).
also picked up at the same time for inclusion into release 3 was the
automated operator stuff (developed for doing automated
benchmarking).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Something wrong with "re-inventing the wheel".?
Newsgroups: alt.folklore.computers,comp.os.cpm
Date: Sat, 24 Feb 2001 18:14:16 GMT

CBFalconer writes:
problem, of course was that the qwerty stuff was/is so prevalent.
one of the early standard keyboard pointers was two sliders at the space bar for the thumbs ... one controlled the X coordinate on the screen and the other the Y coordinate ... problem was that it required coordinating both thumbs to position the pointer.
advances in technology allowed a single small control at the junction of the G/H/B keys (found on some laptops).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS/360 (was LINUS for S/390)
Newsgroups: bit.listserv.ibm-main
Date: Sun, 25 Feb 2001 16:04:30 GMT

listmanager@SNIPIX.FREESERVE.CO.UK (Roger Bowler) writes:
IBM System/360 Model 67 Reference Data

Floating-point Double Precision Instructions:

Load Mixed                  LX    RX  R1,D2(X2,B2)  74
Add Mixed Normalized        AX    RX  R1,D2(X2,B2)  76
Add Double Normalized       ADDR  RR  R1,R2         26
Add Double Normalized       ADD   RX  R1,D2(X2,B2)  66
Subtract Mixed Normalized   SX    RX  R1,D2(X2,B2)  77
Subtract Double Normalized  SDDR  RR  R1,R2         27
Subtract Double Normalized  SDD   RX  R1,D2(X2,B2)  67
Multiply Double Normalized  MDDR  RR  R1,R2         25
Multiply Double Normalized  MDD   RX  R1,D2(X2,B2)  65
Store Rounded (Short)       STRE  RX  R1,D2(X2,B2)  71
Store Rounded (Long)        STRU  RX  R1,D2(X2,B2)  61

Model 67 Instruction Codes:

Load Multiple Control   LMC   RS  M, A, S, D, P   B8
Store Multiple Control  STMC  RS  M, P, A, S      B0
Load Real Address       LRA   RX  M, A, S         B1
Branch and Store        BASR  RR                  0D
Branch and Store        BAS   RX                  4D
Search List (RPQ)       SLT   RS  P, A, S, Relo   A2

Notes:
A  Addressing Exception
D  Data Exception
M  Privileged Operation Exception
P  Protection Exception
S  Specification Exception
also
control registers:
CR0     segment table register (for dynamic address translation)
CR1     unassigned
CR2     translate exception address register
CR3     unassigned
CR4     Extended Mask Registers for I/O Control Masks
CR5     unassigned
CR6     bits 0,1    Machine Check Mask Extensions for channel controller
        bits 2,3    reserved
        bits 4-7    unassigned
        bit 8       extended control mode
        bit 9       configuration control bit, defines when partitioning can take place
        bits 10-23  unassigned
        bits 24-31  external interruption masking
CR7     unassigned
CR8-14  partitioning sensing registers, see 2167 description for register layout
CR15    unassigned
The card also contained the sense byte definitions for
1052/2150, 2540/1821, 1403/1443, 1442/2501/2520, 2671/2822, 2400, 2311/2841, 2301/2820, 2250, 2260, 2280, and 2282
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: database (or b-tree) page sizes
Newsgroups: comp.arch,comp.arch.storage
Date: Mon, 26 Feb 2001 16:31:48 GMT

"Bill Todd" writes:
long ago and far away comparison of about 35 years ago to 20 years ago:
https://www.garlic.com/~lynn/94.html#43
the above possibly represented 90+% of the business critical database processing during the period.
a virtual memory system from 20 years ago could have several thousand 4k pages and an application could have hundreds of 4k pages. "single" paging such an application (one 4k page at a time) started to make less and less sense. any benefit from single-paging such an application with 4k pages was more than offset by the arm contention issues.
The above claimed that basically from the mid-60s to 1980, real storage and cpu increased by a factor of 50, disk transfer increased by a factor of 10 and avg. arm access increased by a factor of 4. In effect, by 1980, the relative system thruput of arm performance had decreased by a factor of 10. With such a significant increase in memory and processing speed ... and some increase in disk transfer speed ... it was possible to "waste" (trade-off) significant amounts of cpu and real storage ... and modest amounts of disk transfer ... attempting to maximize the effectiveness of each arm access.
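the factor of 10 falls straight out of the ratios; a quick sketch (treating cpu/real-storage growth as the baseline for the rest of the system):

# mid-60s to 1980 growth factors from the paragraph above
cpu_mem_growth = 50.0      # real storage and cpu
arm_growth = 4.0           # avg. arm access
relative_decline = cpu_mem_growth / arm_growth
print(f"arm thruput relative to rest of system: ~{relative_decline:.0f}:1 decline")
# ~12:1 ... in the same ballpark as the "factor of 10" cited above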
--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: database (or b-tree) page sizes
Newsgroups: comp.arch,comp.arch.storage
Date: Mon, 26 Feb 2001 20:11:43 GMT

gah@ugcs.caltech.edu (glen herrmannsfeldt) writes:
the 2301 was effectively a 2303 fixed head device that read/wrote 4 tracks in parallel for an approx. 1.5mbyte/sec transfer rate (mid 60s). for paging, a pair of tracks was formatted with 9 4k pages.
the 2305 was the follow-on. both 3330 and 2305 had rotational position sensing. This wasn't so much for drive thruput ... but for the thruput of multiple devices sharing the same i/o channel. prior to 3330/2305, the sequence was position the arm and then select the record. The arm positioning operation could be done w/o making the i/o channel busy (aka multiple arm operations could proceed concurrently); but the select record operation would "busy" the channel (i.e. from the time the arm was in position until the correct record rotated under the head). The RPS feature allowed the device to release the channel for other i/o operations until the correct record rotated under the head.
what the 2305 had in addition was "multiple exposures" ... basically, instead of a single i/o request interface, the 2305 had eight logical i/o request interfaces ... with the 2305 controller servicing the requests in optimal hardware order (i.e. effectively approximating out-of-order execution).
There was also a "fast" 2305 with heads offset 180 degrees, cutting the avg. rotational delay in half (i.e. the first head that the record got to would read it).
There was also a really special 2305 that may have only been available on special contract from intel ... it was all electronic, simulating a 2305 with no arm access and no rotational delay. it had a special model number ... i think 1655 (? or some such).
random refs:
https://www.garlic.com/~lynn/2001b.html#69
https://www.garlic.com/~lynn/2000c.html#34
https://www.garlic.com/~lynn/2000d.html#7
note RPS only helped channel busy when selecting a specific record. lots of that generation of disk used the CKD search feature to find the correct record based on pattern match (as opposed to selecting a specific record). when doing multi-track search, the channel could be continuously busy for as many revolutions as there were tracks in a cylinder (or arm position). example described in the following:
https://www.garlic.com/~lynn/94.html#35
https://www.garlic.com/~lynn/93.html#29
https://www.garlic.com/~lynn/97.html#16
https://www.garlic.com/~lynn/99.html#75
some configuration comparisons
https://www.garlic.com/~lynn/95.html#8
note in the previous reference
https://www.garlic.com/~lynn/95.html#10
where i made the claim that the relative system performance of disk had declined by at least a factor of five over a period of 12-15 years (at the time i wrote the original), a lot of people in the disk division got upset and assigned a performance group to investigate. After some study, they came to the conclusion that the actual decline in performance was worse when the effects of RPS-miss were taken into consideration in a large busy environment.
RPS allowed the device to give up use of the channel while the disk was rotating, but it expected to re-acquire the channel when the head was in position. However, if the channel was busy with some other operation, there was an RPS-miss and the disk had to perform a full revolution and try again. This was before full-track buffers, which didn't show up until the 3880-13 disk controller about 1983 or 1984 (on the road at the intel developer's conference and don't have access to some data ... if anybody else is here, i'm on a panel at 2pm tomorrow).
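a sketch of why RPS-miss hurts under load; the geometric retry model and the numbers (3330-class 16.7ms rotation, made-up transfer time) are assumptions for illustration:

# each RPS miss costs a full extra revolution before the next
# reconnect attempt; with channel-busy probability p per attempt,
# expected extra revolutions are p/(1-p) (geometric distribution).
def rps_service_ms(rotation_ms=16.7, transfer_ms=5.0, p_miss=0.0):
    latency = rotation_ms / 2.0            # avg rotational delay
    retries = p_miss / (1.0 - p_miss)      # expected extra revolutions
    return latency + retries * rotation_ms + transfer_ms

for p in (0.0, 0.2, 0.4):
    print(f"p_miss={p:.1f}: {rps_service_ms(p_miss=p):.1f} ms")
# at 40% channel busy, expected service time nearly doubles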
--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: On RC4 in C
Newsgroups: comp.lang.ada,sci.crypt,talk.politics.crypto
Date: Wed, 28 Feb 2001 05:07:57 GMT

William Hugh Murray writes:
there was also june 23rd, 1969, with the start of separate charging for people services (aka unbundling; services which had previously been packaged for "free" as part of customer support).
following has comments supposedly attributed to testimony in the trial
https://www.garlic.com/~lynn/94.html#44
--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What is "IBM-MAIN"
Newsgroups: comp.lang.asm370
Date: Fri, 02 Mar 2001 16:42:10 GMT

Tom Anderson writes:
bit.listserv.ibm-main
--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Something wrong with "re-inventing the wheel".?
Newsgroups: alt.folklore.computers,comp.os.cpm
Date: Sat, 03 Mar 2001 00:11:27 GMT

some comments about re-inventing the wheel ... also implies re-inventing the same bugs:
https://www.garlic.com/~lynn/aadsm5.htm#asrn4
on a panel discussion at:
https://web.archive.org/web/20011109072807/http://www.intel94.com/idf/spr2001/sessiondescription.asp?id=stp%2bs13
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What is "IBM-MAIN"
Newsgroups: comp.lang.asm370
Date: Sat, 03 Mar 2001 16:08:10 GMT

Bill Becker writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS for IBM 370
Newsgroups: comp.lang.asm370
Date: Sun, 04 Mar 2001 01:59:08 GMT

lwinson@bbs.cpcn.com (lwin) writes:
random refs (including URL for picture of 9-drive 2314 string)
https://www.garlic.com/~lynn/2001b.html#7
random other refs:
https://www.garlic.com/~lynn/95.html#8
https://www.garlic.com/~lynn/95.html#10
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Use of ICM
Newsgroups: comp.lang.asm370
Date: Sun, 04 Mar 2001 05:24:01 GMT

"Robert L. Cochran Jr." writes:
Consider this code

L    8,ADCON1
LH   9,LENGTH1
L    2,ADCON2
LR   3,9
ICM  2,X'40'
MVCL 2,8

What is the ICM doing and why would an applications programmer want to use it?

MVCL will use the high byte of 2 as pad character if the destination length is longer than the source length.
x'40' is an ebcdic blank
however,
a) the icm is coded wrong
b) the mvcl has same length for source & destination (aka the lr 3,9).
icm should be something like
icm 2,b'1000',=x'40'
i.e. take the data at storage location =x'40' and load it into high byte of register 2.
i would hope that the assembler would generate an error for the above icm since there is no 3rd argument ... & with x'40' as the 2nd argument, it is likely that it would be taken to mean the 2nd byte of register 2, not the first byte.
icm also sets the condition code based on whether the loaded value(s) were zero or non-zero i.e. can be used in place of load followed by ltr if just interested in zero/non-zero c.c.
icm   8,b'1111',adcon1     load all four bytes of ADCON1, setting cc
bz    error                branch if ADCON1 was zero
icm   2,b'1111',adcon2     load all four bytes of ADCON2, setting cc
bz    error                branch if ADCON2 was zero
would test to see if adcon1 & adcon2 are zero
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Use of ICM
Newsgroups: comp.lang.asm370
Date: Sun, 04 Mar 2001 05:56:49 GMT

"Robert L. Cochran Jr." writes:
Consider this code

L    8,ADCON1
LH   9,LENGTH1
L    2,ADCON2
LR   3,9
ICM  2,X'40'
MVCL 2,8

and besides it is the wrong reg, the pad byte goes in the high byte of the source length (aka register 9) ... only 24 bits are used in the source and destination length registers.
one might also do an SR 9,9 and an ICM 9,b'0011',length1 with a branch.
so possibly (if there were any chance of source length being less than destination length).
icm   8,b'1111',adcon1     source address from ADCON1, setting cc
bz    error                zero address ... bail
icm   2,b'1111',adcon2     destination address from ADCON2, setting cc
bz    error                zero address ... bail
sr    9,9                  clear source length register
icm   9,b'0011',length1    halfword length into low 16 bits, setting cc
bz    error                zero length ... bail
lr    3,9                  destination length = source length
icm   9,b'1000',=c' '      pad char (blank) into high byte of source length reg
mvcl  2,8                  move, padding with blank if source runs short
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Use of ICM
Newsgroups: comp.lang.asm370
Date: Sun, 04 Mar 2001 06:04:13 GMT

Anne & Lynn Wheeler writes:
random recent refs:
https://www.garlic.com/~lynn/2001.html#70
https://www.garlic.com/~lynn/2001b.html#38
https://www.garlic.com/~lynn/2001c.html#11
https://www.garlic.com/~lynn/2001c.html#10
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Foolish Dozen or so in This News Group
Newsgroups: alt.hacker,sci.crypt
Date: Sun, 04 Mar 2001 07:06:25 GMT

"Scott Fluhrer" writes:
couple URLs alta-vista found on (sector & record) sparing
http://til.info.apple.com/techinfo.nsf/artnum/n24530
https://web.archive.org/web/20001121152400/http://til.info.apple.com/techinfo.nsf/artnum/n24530
http://www.eros-os.org/design-notes/DiskFormatting.html
http://docs.rinet.ru:8080/UNT4/ch28/ch28.htm
http://mlarchive.ima.com/winnt/1998/Nov/2142.html
https://web.archive.org/web/20010222201650/http://mlarchive.ima.com/winnt/1998/Nov/2142.html
misc. words from scsi standard
Any medium has the potential for defects which can cause user data
to be lost. Therefore, each logical block may contain information
which allows the detection of changes to the user data caused by
defects in the medium or other phenomena, and may also allow the data
to be reconstructed following the detection of such a change. On some
devices, the initiator has some control through use of the mode
parameters. Some devices may allow the initiator to examine and
modify the additional information by using the READ LONG and WRITE
LONG commands. Some media having a very low probability of defects
may not require these structures.
Defects may also be detected and managed during execution of the
FORMAT UNIT command. The FORMAT UNIT command defines four sources of
defect information. These defects may be reassigned or avoided during
the initialization process so that they do not appear in a logical
block.
Defects may also be avoided after initialization. The initiator
issues a REASSIGN BLOCKS command to request that the specified logical
block address be reassigned to a different part of the medium. This
operation can be repeated if a new defect appears at a later time.
The total number of defects that may be handled in this manner can be
specified in the mode parameters.
Defect management on direct-access devices is usually vendor
specific. Devices not using removable medium typically optimize the
defect management for capacity or performance or both. Devices that
use removable medium typically do not support defect management (e.g.,
some floppy disk drives) or use defect management that is based on the
ability to interchange the medium.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Massive windows waisting time (was Re: StarOffice for free)
Newsgroups: alt.folklore.computers
Date: Sun, 04 Mar 2001 17:34:53 GMT

ab528@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) writes:
feb 30th?
random refs on 360/195
https://people.computing.clemson.edu/~mark/acs_technical.html
http://csep1.phy.ornl.gov/ov/node12.html
https://web.archive.org/web/20020208203909/http://csep1.phy.ornl.gov/ov/node12.html
http://www.cs.ucl.ac.uk/research/darpa/internet-history.html
https://web.archive.org/web/20020227225902/http://www.cs.ucl.ac.uk/research/darpa/internet-history.html
http://www.crowl.org/Lawrence/history/computer_list
http://www.cmc.com/lars/engineer/comphist/model360.htm
https://web.archive.org/web/20010610042541/http://www.cmc.com/lars/engineer/comphist/model360.htm
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Foolish Dozen or so in This News Group
Newsgroups: alt.hacker,sci.crypt
Date: Sun, 04 Mar 2001 23:22:02 GMT

"Douglas A. Gwyn" writes:
there is some chance (in unix) of issuing fsck a couple times and waiting a minute or so between passes.
of course even this changes if you are dealing with log-structured filesystem ... which attempts to always write to a new location.
random ref:
https://www.garlic.com/~lynn/2000.html#93
for many of these scenarios it just about boils down to some filesystem enhancement that meets some zeroization standard when blocks are released (and uses sequences of patterns that satisfy some criteria based on knowledge of the magnetic properties of the disk surface).
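a minimal sketch of that kind of release-time scrub in python (the pattern sequence and interface are made up; real standards specify patterns based on the media, and per the log-structured caveat above this only helps if the filesystem actually writes in place):

import os

# overwrite a byte range with a sequence of patterns before the
# block is released; the patterns here are illustrative only.
PATTERNS = (b"\x00", b"\xff", b"\x55", b"\xaa")

def zeroize(path, offset, length):
    with open(path, "r+b") as f:
        for pat in PATTERNS:
            f.seek(offset)
            f.write(pat * length)
            f.flush()
            os.fsync(f.fileno())   # force each pass out to the media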
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: mini-DTR tapes? (was Re: The longest thread ever ...)
Newsgroups: alt.folklore.computers
Date: Mon, 05 Mar 2001 02:07:17 GMT

Eric Smith <eric-no-spam-for-me@brouhaha.com> writes:
the next size up held maybe up to 600' of tape and typically actually came in a case (the gray plastic reels didn't have a case; tape was kept from spilling off the reel with a rubber band or a very smooth strip of plastic). the 600' reels may have been 1/2 the diameter of a standard 2400' tape reel.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Mon, 05 Mar 2001 21:44:00 GMT

"Lyalc" writes:
hardware tokens can be accessed with pin/secret-key and then do public/private key digital signatures for authentication ... but there is no "sharing" of the pin/secret-key.
taken to the extreme, biometrics is a form of secret key .... and while compromise of pin/secret-key in a shared-secret infrastructure involves issuing a new pin/secret-key ... current technology isn't quite up to issuing new fingers if a biometric shared-secret were compromised (aka there are operational differences between shared-secret paradigms and non-shared-secret paradigms ... even if both have similar pin/secret-key/biometrics mechanisms).
in an account-based infrastructure that already uses some form of authentication (pins, passwords, mothers-maiden-name, #SSN, etc), it is relatively straight-forward technology upgrade to public/private key authentication ... w/o requiring business process dislocation that many PKIs represent.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: database (or b-tree) page sizes
Newsgroups: comp.arch
Date: Mon, 05 Mar 2001 21:59:22 GMT

shocking@houston.rr.com (Stephen Hocking) writes:
D. Hatfield & J. Gerald, Program Restructuring for Virtual Memory, IBM Systems Journal, v10n3, 1971
and in the mid-70s a product called VS/Repack, based on that work, was released (given a large multiple-csect/module program it would make a pass at automatic program restructuring).
recent ref to some paging stuff in a.f.c
https://www.garlic.com/~lynn/2001c.html#10
and ref from comp.arch:
https://www.garlic.com/~lynn/93.html#4
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How Commercial-Off-The-Shelf Systems make society vulnerable
Newsgroups: comp.security.misc,sci.military.moderated,sci.military.naval
To: sci-military-moderated@moderators.isc.org
Date: Mon, 05 Mar 2001 16:02:55 GMT

John Doe writes:
random refs:
https://www.garlic.com/~lynn/aadsm5.htm#asrn4
https://www.garlic.com/~lynn/aadsm5.htm#asrn1
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: database (or b-tree) page sizes
Newsgroups: comp.arch
Date: Tue, 06 Mar 2001 15:06:42 GMT

Anne & Lynn Wheeler writes:
random refs:
https://www.garlic.com/~lynn/93.html#5
https://www.garlic.com/~lynn/94.html#7
https://www.garlic.com/~lynn/99.html#20
https://www.garlic.com/~lynn/2000g.html#30
the other method was a modification of the virtual memory system for data collection. a max. limit was specified for a process (say 10 pages), the program was run until it had a page fault that would exceed the limit, all the valid virtual page numbers were then output/recorded along with the accumulated cpu consumption, all the pages were invalidated (but not necessarily removed from memory), and the application restarted (since the pages remained in memory, although invalid, the major overhead was just the additional page faults). This also had the advantage of obtaining both instruction and data refs. Various calibrations and tests (especially for large programs) showed that it was a relatively close approximation to the full instruction-trace method of storage ref. capture.
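a toy sketch of that capture loop in python (the page limit, trace format, and reference string are invented for illustration):

# run until the process has touched more than LIMIT distinct pages,
# snapshot the valid set (plus accumulated "cpu"), invalidate
# everything, and resume ... pages stay in memory, so the only real
# cost is the extra faults.
LIMIT = 10
valid = set()            # pages currently marked valid
snapshots = []           # (cpu_so_far, page_set) trace records
cpu = 0

def reference(page):
    global cpu, valid
    cpu += 1                         # stand-in for accumulated cpu time
    if page not in valid:            # "page fault"
        if len(valid) >= LIMIT:
            snapshots.append((cpu, sorted(valid)))
            valid = set()            # invalidate all pages
        valid.add(page)

for p in [1, 2, 3, 1, 2, 4, 5, 6, 7, 8, 9, 10, 11, 1, 2]:
    reference(p)
print(snapshots)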
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Tue, 06 Mar 2001 15:30:19 GMT

"Lyalc" writes:
the issue tends to be that a front-end PKI for pilots and test ... involving a small number of accounts can be shown to be less costly than modifying the production system and business processes ... especially if there are various kinds of risk acceptance having the PKI information out of synch with the production account information.
There tends to be a trade-off attempting to scale to full production where the costs of a duplicate PKI account-based infrastructure has to be evolved to the same level as the production account-based infrastructure ... along with the additional business processes maintaining consistency between the PKI accounts and the production accounts. Full scale-up might represent three times the cost of the base legacy infrastructure, rather than <5% of the legacy infrastructure.
That is independent of the costs of a hardware token ... which could be the same in either an account-based deployment or a PKI-based deployment (and i'm working hard at making the costs of producing such a token as low as possible while keeping the highest possible assurance standard). There is a pending $20B upgrade issue for the existing ATM infrastructure that is going to be spent anyway (because of the DES issue). When that is spent, the difference between whether or not public-key support is also included in the new swap-in is lost in the noise of the cost of the overall swap. The existing ATM problem is further confounded by the fact that there are master DES keys in addition to the individual DES-key infrastructure (representing significant systemic risks, similar to CA root keys) ... the back-end cost savings from elimination of systemic risk and shared-secrets more than offset the incremental front-end costs of adding public key technology in a swap-out that is going to have to occur anyway.
A trivial example would be RADIUS ... which possibly represents 99.999999% of existing client authentication events around the world on the internet. A public-key upgrade to RADIUS is well under a couple hundred thousand dollars ... resulting in a RADIUS that supports multiple concurrent methods of authentication, with the connecting authority able to specify the authentication protocol on an account by account basis.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How Commercial-Off-The-Shelf Systems make society vulnerable
Newsgroups: comp.security.misc,sci.military.naval
Date: Tue, 06 Mar 2001 15:40:02 GMT

Frank Gerlach writes:
the use of either explicit lengths and/or instantiated lengths (under the covers) in string libraries and I/O operations could go a long way to achieving that goal.
in the case of os390 ... it isn't a case of assembler ... or pls or pli, etc ... it is the convention of explicit length semantics in the system. the i/o services from the highest level down to the hardware implementation use explicit lengths or instantiated lengths (under the covers) ... so that many types of buffer problems either don't occur or are prevented at runtime (one way or another the i/o routines typically know the max. length of the buffer, either explicitly in the i/o call or by an instantiated length built into the buffer).
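a sketch of the convention in python (the 2-byte length prefix and max-buffer parameter are illustrative; the point is that every transfer carries an explicit length that gets checked against the buffer maximum):

import io
import struct

# every record declares its length up front and the read routine
# knows the buffer maximum ... overruns become checked errors
# instead of silent overlays.
def read_record(stream, max_len=4096):
    hdr = stream.read(2)
    if len(hdr) < 2:
        raise EOFError("truncated length field")
    (n,) = struct.unpack(">H", hdr)    # explicit length, big-endian
    if n > max_len:
        raise ValueError(f"{n}-byte record exceeds {max_len}-byte buffer")
    data = stream.read(n)
    if len(data) < n:
        raise EOFError("short read")
    return data

print(read_record(io.BytesIO(struct.pack(">H", 5) + b"hello")))  # b'hello'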
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How Commercial-Off-The-Shelf Systems make society vulnerable
Newsgroups: comp.security.misc,sci.military.naval
Date: Tue, 06 Mar 2001 16:04:26 GMT

Anne & Lynn Wheeler writes:
subsequently one customer acquired an ascii device (I think a plotter that simulated a tty interface) that had transfers much longer than 72 characters. They patched the system to increase the maximum transfer limit to 1000(?) bytes ... but didn't pick up on some of the operations involving one-byte numbers (aka 0-255). They recorded a couple dozen system failures in a single day because of the resulting buffer problem.
random ref:
https://www.multicians.org/thvv/360-67.html
https://www.garlic.com/~lynn/2000.html#30
it is still possible to have basic programming errors that give rise to buffer problems ... but nothing like the numbers that are currently experienced.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: database (or b-tree) page sizes
Newsgroups: comp.arch
Date: Tue, 06 Mar 2001 17:12:26 GMT

Bernd Paysan writes:
airlines are even more interesting ... since there aren't any ongoing relationships ... just large numbers of new PNRs (passenger name records) every day. Even if you only keep six months online ... there are all the new inserts every day ... plus approx. the same number of deletes for the cleaning of aged-out records. A typical system might have hundreds of thousands of PNR inserts and a similar number of deletes every day ... in addition to all the nominal query/update activity.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How Commercial-Off-The-Shelf Systems make society vulnerable
Newsgroups: comp.security.misc,sci.military.naval
Date: Tue, 06 Mar 2001 18:11:07 GMT

Anne & Lynn Wheeler writes:
a very fundamental issue turned out to be semantics and paradigms that relied on implicit conventions. The prediction with regard to buffers & strings in C was based on vulnerability analysis identifying that common buffer/string handling relied on implicit paradigm/semantics.
explicit &/or instantiated tends to create a higher level of awareness of the issues with programmers ... so the problems just don't show up ... and/or gives runtime processes explicit information to handle the situations.
i.e. it wasn't hindsight ... it was an issue of recognizing that a major contributor to flaws is implicit (vis-a-vis explicit).
random other refs:
https://www.garlic.com/~lynn/94.html#11
there is sometimes a trade-off between complexity and being explicit. While complexity is also a major contributor to flaws ... if explicit has to be sacrificed in favor of simplicity ... then something needs to be explicitly instantiated underneath the covers.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Tue, 06 Mar 2001 20:57:28 GMT

"Lyalc" writes:
it is possible to do 3-factor authentication with no shared-secret
1) something you have
2) something you know
3) something you are
using a hardware token that requires both pin & biometric .... where the business process of a pin-activated card is significantly different from a shared-secret PIN business process (even if they are both PINs).
a hardware token with a pin & biometric requirement can be used to meet 3-factor authentication ... and a shared-secret doesn't exist.
and doing it w/o impacting the business process infrastructure .... is a separate issue from impacting the technology implementation; pilots tend to be technology issues ... real deployments frequently are business process issues.
similar discussion in sci.crypt from last oct:
https://www.garlic.com/~lynn/2000e.html#40
https://www.garlic.com/~lynn/2000e.html#41
https://www.garlic.com/~lynn/2000e.html#43
https://www.garlic.com/~lynn/2000e.html#44
https://www.garlic.com/~lynn/2000e.html#47
https://www.garlic.com/~lynn/2000e.html#50
https://www.garlic.com/~lynn/2000e.html#51
https://www.garlic.com/~lynn/2000f.html#0
https://www.garlic.com/~lynn/2000f.html#1
https://www.garlic.com/~lynn/2000f.html#14
https://www.garlic.com/~lynn/2000f.html#15
https://www.garlic.com/~lynn/2000f.html#2
https://www.garlic.com/~lynn/2000f.html#22
https://www.garlic.com/~lynn/2000f.html#23
https://www.garlic.com/~lynn/2000f.html#24
https://www.garlic.com/~lynn/2000f.html#25
https://www.garlic.com/~lynn/2000f.html#3
https://www.garlic.com/~lynn/2000f.html#4
https://www.garlic.com/~lynn/2000f.html#7
https://www.garlic.com/~lynn/2000f.html#8
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Tue, 06 Mar 2001 21:00:50 GMT

"Lyalc" writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Tue, 06 Mar 2001 21:32:21 GMT

Anne & Lynn Wheeler writes:
take the fully loaded existing costs of sending out a new magstripe card. the incremental costs of injecting a chip into that card is somewhere between 10% and 50% of the fully loaded costs of turning out a new card.
given the chip has the characteristics of a hardware token and doesn't implement shared-secrets ... then the financial requirement for needing unique magstripes with unique shared-secrets is alleviated (i.e. shared-secrets are not being divulged to different financial and commercial organizations).
with x9.59 for all electronic retail transactions ... then theoretically, a single hardware token performing digitally signed, x9.59 authenticated transactions ... would not have a financial security issue if the token were used for multiple different accounts, potentially leading to a reduction in the number of cards shipped (because the infrastructure has switched away from shared-secret ... even if the token is PIN activated).
furthermore, in the current credit card world, "expiration date" is used as sort of an authentication code (i.e. you can randomly guess a credit card number with some realistic chance of it being valid .... the probability of also guessing the corresponding expiration date is much lower). a hardware token supporting authenticated (x9.59) transactions minimizes the requirement that an expiration date needs to be carried on the card as a kind of authentication code.
Given that the incremental cost of injecting a chip into an existing card delivery business process is less than 100% of the current fully loaded costs, then if the chip can be used to eliminate at least one other card delivery ... there is a net savings to the infrastructure.
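a back-of-envelope version of that argument (the $10 card cost is a made-up placeholder; only the 10%-50% ratio comes from the above):

# illustrative numbers only
card_cost      = 10.00                 # hypothetical fully loaded cost/card
chip_increment = 0.50 * card_cost      # worst case of the 10%-50% range

# if the chip token serves multiple accounts, every card issue it
# eliminates saves a full card cost against the one-time chip increment
cards_eliminated = 1
print(cards_eliminated * card_cost - chip_increment)   # 5.00 > 0, net saving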
If the
1) ATM swap-out costs show negligible difference across the kind of technology and/or number of technologies supported
2) and if the backend costs can be significantly reduced by moving to a non-shared-secret infrastructure
3) and if the elimination of shared-secrets and/or other chip related characteristics reduces the number of cards that have to go out
then there is a net reduction in business costs .... independent of issues related to fraud reduction with the migration to better authenticated transactions ... and independent of any non-repudiation cost reduction issues.
i.e., by appropriately applying technology to basic business processes there are opportunities for cost reduction.
This approach is completely different from one that might look at the total costs of deploying hardware tokens and authentication technology as part of a brand new infrastructure, totally unrelated to any existing business purpose (not how much more it costs, but how much less it costs).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: PKI and Non-repudiation practicalities Newsgroups: sci.crypt Date: Wed, 07 Mar 2001 15:04:20 GMTBenjamin Goldberg writes:
in the shared-secret scenarios ... the shared-secrets are registered someplace and are subject to harvesting ... aka effectively credit card numbers are treated as shared-secrets (witness all the stuff written about protecting master credit-card databases at merchant servers). Harvesting of master database files of shared-secrets is significantly simpler than defeating tamper-evident hardware and/or beating somebody with a rubber hose.
eliminating shared-secrets was the point of the discussion ... and distinguishing shared-secret infrastructures vis-a-vis secret infrastructures, along with the difference in fraud ROI; aka a scenario where somebody can electronically steal 100,000 shared-secrets in a couple of minutes ... vis-a-vis taking hrs to steal one secret significantly changes the risk, exploit, and fraud characteristics of an infrastructure. If it is possible to deploy a "secret" infrastructure for approximately the same cost as a shared-secret infrastructure, and the "secret" infrastructure reduces the fraud ROI by five to ten orders of magnitude (i.e. it takes a thousand times as much effort to obtain 1/100000 of the usable fraudulent material), then the trade-off strongly favors the "secret" infrastructure.
random ref
https://www.garlic.com/~lynn/2000b.html#22
the other part of the scenario ... is that financial and various other commercial infrastructures strongly push that in shared-secret scenarios the same shared-secret can't be shared across multiple different organizations with multiple different objectives, i.e. an employer typically has strong regulations against "sharing" a corporate access password shared-secret with other organizations (i.e. using the same shared-secret password to access the corporate intranet as is used to access a personal ISP account and misc. random webservers around the world).
I would guess that a gov. agency would not be too pleased if a gov. agency employee specified their employee access (shared-secret) password ... as an access (shared-secret) password for some webserver registration site in some other country.
However, it is possible to register a public key in multiple places; employees of one organization couldn't use public keys harvested at that organization for penetration of other organizations.
Furthermore, the rubber-hose approach is going to take quite a bit longer to obtain a million secrets and hardware tokens that correspond to the registered public keys (as compared to some of the shared-secret harvesting techniques). Let's say that the rubber-hose approach takes something like two days per ... planning, setup, capture, executing, etc and involves a minimum of two people. That is four person-days per secret. For a million secrets using the rubber-hose method, it then takes four million person-days, or roughly 10,959 person-years. By comparison some of the shared-secret harvesting techniques can be done in a couple of person-weeks for a million shared-secrets.
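the arithmetic, spelled out:

# rubber-hose arithmetic from the paragraph above
people, days = 2, 2
person_days_per_secret = people * days          # 4 person-days per secret
secrets = 1_000_000
total_person_days = person_days_per_secret * secrets
print(total_person_days)                        # 4,000,000 person-days
print(round(total_person_days / 365))           # 10959, i.e. ~10,959 person-years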
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: PKI and Non-repudiation practicalities Newsgroups: sci.crypt Date: Wed, 07 Mar 2001 15:07:58 GMT"Lyalc" writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: PKI and Non-repudiation practicalities Newsgroups: sci.crypt Date: Wed, 07 Mar 2001 15:19:17 GMTAnne & Lynn Wheeler writes:
sort of reminds me of the inverse of the story about why telephones and automobiles were never going to take off in the market ... both required the services of human operators. the solution in both cases was to make each individual person their own operator (i.e. rather than having to hire telephone operators and automobile drivers, each person was responsible for their own). The person-hour projections for the number of telephone operators and automobile drivers were at least similar in scale to the rubber-hose solution to solving the fraud yield/ROI problem with a transition from a shared-secret infrastructure to a secret infrastructure (with tokens and public keys).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: PKI and Non-repudiation practicalities Newsgroups: sci.crypt Date: Wed, 07 Mar 2001 16:58:23 GMTjunk@nowhere.com (Mark Currie) writes:
random refs to the AADS model can be found at:
https://www.garlic.com/~lynn/
Take an infrastructure that currently registers shared-secrets and instead register public keys. No business process costs ... just some technology costs.
Given that many back-end systems have some pretty stringent security and audit requirements specifically targeted at preventing things like insiders harvesting shared-secrets .... some of those procedures and associated costs can be alleviated.
Also, in the ISP world ... a significant cost is the service calls associated with handling a password compromise. This is further aggravated by human factors issues with people having to track & remember a large number of different shared-secrets ... because of the guidelines about not using the same shared-secrets in multiple different domains.
i.e. the start of my comments on this thread was purely the transition of existing business processes (no new, changed &/or different business processes, no reliance on 3rd parties and all the associated new issues with regard to liability, vulnerabilities, and systemic risk, etc) from a shared-secret paradigm to a public key/secret/token paradigm ... and some deployment approaches that would result in lower costs than the current shared-secret paradigm (for instance adding a chip to an existing card being distributed might be able to save having to distribute one or more subsequent cards ... resulting in distributing hardware tokens actually costing the overall infrastructure less than distributing magstripe cards).
random systemic risk refs from thread in sci.crypt in fall of 99
https://www.garlic.com/~lynn/99.html#156
https://www.garlic.com/~lynn/99.html#236
https://www.garlic.com/~lynn/99.html#240
random other system risk refs:
https://www.garlic.com/~lynn/98.html#41
https://www.garlic.com/~lynn/2000.html#36
https://www.garlic.com/~lynn/2001c.html#34
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: PKI and Non-repudiation practicalities Newsgroups: sci.crypt Date: Wed, 07 Mar 2001 17:05:51 GMTAnne & Lynn Wheeler writes:
Let's say the person has a $5k credit limit on the account associated with the use of the token ... that means that the crooks can go out and (at best) make $5k worth of fraudulent purchases. Say brand-new stuff that they then have to fence at .10 on the dollar ... yielding $500 from the $5k of fraudulent purchases.
Downside is that the person may be able to report the token lost/stolen/compromised prior to the criminals being able to fully take advantage of it. There is also the possibility that the person already has some charges outstanding so the available credit is less than the credit limit.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: PKI and Non-repudiation practicalities Newsgroups: sci.crypt Date: Wed, 07 Mar 2001 17:37:51 GMTAnne & Lynn Wheeler writes:
there are possible trade-offs; paper signatures are probably a little easier to counterfeit than hardware token digital signatures. the rubber hose bit is probably about the same. in any kind of token/public key registration environment ... it should be straight-forward to report lost/stolen/compromised (in the AADS model it should be the same as existing card lost/stolen).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: database (or b-tree) page sizes Newsgroups: comp.arch,comp.arch.storage Date: Wed, 07 Mar 2001 21:43:49 GMTJan Vorbrueggen writes:
there was some amount of work in the mid-70s that showed, for configurations starting at least with 1mbyte of real memory (and possibly smaller), that VS1 running in "handshaking" mode under VM/370 ... ran faster (& sometimes significantly so) than VS1 running "natively" on the real machine.
VM/370 used 4k paging ... and when VS1 ran under VM/370 a large virtual machine was defined for VS1 where VS1 had its total virtual memory mapped one-for-one to VM/370 virtual machine virtual memory (i.e. VS1 never had a normal page fault because all of its virtual pages were defined as resident ... at least in the storage of the virtual machine). As a result, no VS1 2k page faults occurred and VS1 performed no 2k page fetch/write I/O operations when running in hand-shaking mode under VM/370. Any page faults and paging that did take place were VM/370 4k page faults and 4k fetch/write I/O operations associated with the virtual storage of the VS1 virtual machine.
The trade-offs were VS1 running with 2k pages, 2k page faults, and 2k page I/O, natively on the real hardware vis-a-vis VS1 running in a virtual machine (with the various associated virtual machine simulation overheads) with VM/370 supporting 4k pages, 4k page faults, and 4k page I/O (plus the reduction in available real storage for VS1 operation represented by the VM/370 kernel and misc. other fixed storage requirements).
So, let's say for a 370/148 with 1mbyte of real storage:
vs1 native on the real machine with 512 2k pages
vs1 in a vm/370 virtual machine, with vm/370 running on the real machine and requiring approx. 200k bytes of fixed storage ... reducing the storage remaining to VS1 to about 800kbytes ... or approximately 200 4k pages.
Even with the associated virtual machine simulation overhead and reduction in real storage, the benefits of having VM/370 manage the infrastructure using 4k pages outweighed running VS1 on the native hardware managing the infrastructure using 2k pages.
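the page-count arithmetic for that example (numbers from the above):

real_storage = 1024 * 1024                  # 1mbyte 370/148
vm370_fixed  = 200 * 1024                   # approx. vm/370 kernel etc.

native_vs1 = real_storage // 2048                    # 512 2k pages
under_vm   = (real_storage - vm370_fixed) // 4096    # 206, i.e. approx 200 4k pages
print(native_vs1, under_vm)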
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: database (or b-tree) page sizes Newsgroups: comp.arch,comp.arch.storage Date: Wed, 07 Mar 2001 22:00:27 GMTAnne & Lynn Wheeler writes:
VM/370 initially increased that (the cp/67 page-fault pathlength) by a factor of 3-4 ... but I was able to get it back down to <400.
The equivalent pathlength in VS/1 was closer to 15* the cp/67 number.
4k paging helped ... but having approx. 1/10th the pathlength per (page fault) event possibly helped more than having approx. 1/2 the number of (page fault) events (using 4k instead of 2k pages) ... overall reducing the virtual memory associated pathlength by a factor of 20.
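combining the two effects:

pathlength_ratio = 1 / 10    # vm/370 fault path approx 1/10th the VS1 path
event_ratio      = 1 / 2     # 4k pages: approx half as many fault events
print(1 / (pathlength_ratio * event_ratio))    # 20.0, the factor above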
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: PKI and Non-repudiation practicalities Newsgroups: sci.crypt Date: Thu, 08 Mar 2001 14:59:11 GMT"Lyalc" writes:
that is one of the reasons after working on:
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
https://www.garlic.com/~lynn/aadsm5.htm#asrn4
https://www.garlic.com/~lynn/aadsm5.htm#asrn1
that I decided to start looking at a PKI (public key infrastructure, AADS model) that used the technology but preserved the existing trust & business process models.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: PKI and Non-repudiation practicalities Newsgroups: sci.crypt Date: Thu, 08 Mar 2001 15:17:07 GMTjunk@nowhere.com (Mark Currie) writes:
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
https://www.garlic.com/~lynn/aadsm5.htm#asrn4
https://www.garlic.com/~lynn/aadsm5.htm#asrn1
it seemed to be worthwhile to look at a public key infrastructure model that preserved the existing trust and business process models.
random refs:
https://www.garlic.com/~lynn/aadsover.htm
https://www.garlic.com/~lynn/draft-wheeler-ipki-aads-01.txt
https://www.garlic.com/~lynn/aadswp.htm
CAs and certificates were a design point to address trust in offline email where little or no trust processes existed ... which might show a net benefit. However, introducing a new trust participant into existing value transactions containing a significant number of stake & trust members is a very, very difficult business process. If the argument is purely based on better authentication technology .... then do an implementation that is a technology public key infrastructure w/o having to introduce new trust partners into every transaction.
For a technology demonstration, a CA-based infrastructure with certificates is possibly a less expensive demo scenario, i.e. a couple thousand certificates and some boundary demonstration of signed transactions which are then converted into standard business form and processed in the normal way. Somebody takes a risk acceptance on the fact that the limited demo isn't integrated into the standard trust and business processes. The trade-off is the integration of the public key infrastructure into the existing business & trust processes (1-5% of the existing data processing infrastructure costs) versus scaling a CA-based infrastructure with certificates into a duplicate of the existing business & trust processes (100% of the existing data processing infrastructure costs) plus synchronizing the CA base and the existing base for things like referential integrity (possibly another 100% of the base).
The cut-over from the CADS model for demo purposes to an integrated AADS model would be at 1-5% of the account base plus any risk exposure because of a non-integrated operation.
Part of some of the AADS-model scenarios has been trying to demonstrate overall cost savings converting to public key infrastructure authentication from existing business authentication methods (i.e. various benefits are larger than the cut-over costs). Part of that is looking for existing transition activities and attempting to merge an AADS-model transition into some transition that is already going to be done and has been cost justified for other reasons.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: PKI and Non-repudiation practicalities Newsgroups: sci.crypt Date: Thu, 08 Mar 2001 16:37:36 GMTAnne & Lynn Wheeler writes:
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
https://www.garlic.com/~lynn/aadsm5.htm#asrn4
https://www.garlic.com/~lynn/aadsm5.htm#asrn1
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Varian (was Re: UNIVAC - Help ??) Newsgroups: alt.folklore.computers Date: Thu, 08 Mar 2001 23:53:01 GMTEric Chomko writes:
Later, a friend of mine there (LSI) ported the bell "370" C compiler to CMS in the early '80s (as well as fixing/rewriting significant pieces) so he could then port various c-based chip tools (many out of berkeley) to CMS.
In the mid 80s ... they were using SGIs as graphics front-end with VM/370 on 3081 as the tools backend.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: PKI and Non-repudiation practicalities Newsgroups: sci.crypt Date: Fri, 09 Mar 2001 00:23:32 GMTAnne & Lynn Wheeler writes:
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: How Many Mainframes Are Out There Newsgroups: bit.listserv.ibm-main Date: Fri, 09 Mar 2001 14:40:03 GMTHoward Brazee writes:
https://www.garlic.com/~lynn/2001b.html#56
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: PKI and Non-repudiation practicalities Newsgroups: sci.crypt Date: Fri, 09 Mar 2001 14:35:20 GMTnot-a-real-address@usa.net (those who know me have no need of my name) writes:
a generalized identity certificate represents a severe privacy issue. the solution is a domain/business specific certificate carrying something like just an account number ... so as not to unnecessarily divulge privacy information (even name). The EU is saying that electronic payments at point-of-sale need to be as anonymous as cash. By implication that means that payment cards need to remove even the name from the card. A certificate infrastructure works somewhat with an online environment and just an account number.
a generalized 3rd party certificate represents a severe liability issue. the solution is a domain/business specific relying-party-only certificate.
combine an "account" certificate and a relying-party-only certificate you have a CADS model that is the AADS model with redundant and superfluous certificates appended to every transactions, i.e. it is trivial to show that when a public key is registered with an institution and a copy of the relying-party-only certificate is returned (for privacy and liability reaons) where the original is kept by the registering institution; then returning the copy of the relying-party-only certificate to the institution appended to every transaction is redundant and superfluous because the institution already has the original of the relying-party-only certificate.
it is redundant and superfluous to send a copy of the relying-party-only certificate appended to every transaction to an institution that has the original of the relying-party-only certificate (i.e. the institution that was both the RA and the CA for that specific relying-party-only certificate).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: PKI and Non-repudiation practicalities Newsgroups: sci.crypt Date: Fri, 09 Mar 2001 14:59:10 GMTnot-a-real-address@usa.net (those who know me have no need of my name) writes:
https://www.garlic.com/~lynn/ansiepay.htm#aadsnwi2
the argument there is that for the CADS business solution to privacy and liability the certificates are actually there .... but using some knowledge about the actual business flow of the transactions it is possible to compress the certificate size by eliminating fields that are already in possession of the relying party ... and specifically to show that all fields in the certificate are already present at the relying party and therefore it is possible to deploy a very efficient CADS infrastructure using zero-byte certificates.
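a sketch of the compression argument (field names purely illustrative): drop every field the relying party already holds from registration; for a relying-party-only certificate that is every field:

registered = {"account": "12345", "pubkey": "...", "issuer": "mybank"}

def compress(cert, already_held):
    return {k: v for k, v in cert.items() if k not in already_held}

cert_copy = dict(registered)            # the copy returned to the key owner
print(compress(cert_copy, registered))  # {} ... i.e. zero bytes to append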
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: PKI and Non-repudiation practicalities Newsgroups: sci.crypt Date: Fri, 09 Mar 2001 17:46:59 GMTjunk@nowhere.com (Mark Currie) writes:
Currently if a person has 15 pieces of plastic in their wallet, and their wallet is lost or stolen, they have 15 places to notify. However, there are several 1-800 offerings that can be used to notify all 15.
If a person/business chose to have one or two hardware tokens registered in multiple places or 15 hardware tokens registered in multiple places ... there are still 15 places to notify.
The decision of what granularity (1-to-1 or 1-to-many) is a business/personal decision (i.e. a business may or may not allow various options, and individuals may or may not choose various options).
There is a serious issue whether the CADS model scales up to a large number of end-points ... with either CRLs or OCSP.
The other issue is that significant numbers of business, commercial and financial entities are having difficulty with the standard 3rd party identity certificate model for privacy and liability reasons. For instance, there is the claim that the EU is dictating that electronic transactions at retail point-of-sale will be as anonymous as cash; aka current payment cards have to remove name & other identification from plastic and magstripe ... as well as eliminate the signature requirement.
The solution has been relying-party-only certificates. The relying-party-only aspect eliminates the serious liability issues. A relying-party-only certificate carrying just an account number eliminates the serious privacy issues.
However, walking thru the process flows for a relying-party-only certificate, it is possible to show that appending a relying-party-only certificate (in the CADS model) is redundant and superfluous.
The other approach: it is possible to show that a relying-party-only certificate appended to every transaction can be compressed to zero bytes (i.e. every transaction has a CADS zero-byte certificate appended).
misc. refs discussing redundant and superfluous appending of relying-party-only certificates and/or how relying-party-only certificates can be trivially compressed to zero bytes:
https://www.garlic.com/~lynn/99.html#238
https://www.garlic.com/~lynn/99.html#240
https://www.garlic.com/~lynn/2000.html#36
https://www.garlic.com/~lynn/2000b.html#53
https://www.garlic.com/~lynn/2000b.html#92
https://www.garlic.com/~lynn/2000e.html#40
https://www.garlic.com/~lynn/2000e.html#47
https://www.garlic.com/~lynn/2000f.html#15
https://www.garlic.com/~lynn/2000f.html#24
https://www.garlic.com/~lynn/2001.html#67
https://www.garlic.com/~lynn/2001c.html#56
https://www.garlic.com/~lynn/2001c.html#8
https://www.garlic.com/~lynn/2001c.html#9
https://www.garlic.com/~lynn/aadsm2.htm#account
https://www.garlic.com/~lynn/aadsm2.htm#inetpki
https://www.garlic.com/~lynn/aadsm2.htm#integrity
https://www.garlic.com/~lynn/aadsm2.htm#pkikrb
https://www.garlic.com/~lynn/aadsm2.htm#scale
https://www.garlic.com/~lynn/aadsm2.htm#stall
https://www.garlic.com/~lynn/aadsm3.htm#cstech6
https://www.garlic.com/~lynn/aadsm3.htm#kiss1
https://www.garlic.com/~lynn/aadsm3.htm#kiss2
https://www.garlic.com/~lynn/aadsm3.htm#kiss4
https://www.garlic.com/~lynn/aadsm3.htm#kiss5
https://www.garlic.com/~lynn/aadsm3.htm#kiss6
https://www.garlic.com/~lynn/aadsmail.htm#variations
https://www.garlic.com/~lynn/aepay3.htm#aadsrel1
https://www.garlic.com/~lynn/aepay4.htm#comcert10
https://www.garlic.com/~lynn/aepay4.htm#comcert11
https://www.garlic.com/~lynn/aepay4.htm#comcert12
https://www.garlic.com/~lynn/aepay4.htm#comcert15
https://www.garlic.com/~lynn/aepay4.htm#comcert2
https://www.garlic.com/~lynn/aepay4.htm#comcert3
https://www.garlic.com/~lynn/aepay4.htm#comcert9
https://www.garlic.com/~lynn/aepay4.htm#dnsinteg2
https://www.garlic.com/~lynn/aepay4.htm#x9flb12
https://www.garlic.com/~lynn/ansiepay.htm#simple
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: PKI and Non-repudiation practicalities Newsgroups: sci.crypt Date: Sat, 10 Mar 2001 00:07:36 GMTDaniel James writes:
after fixing fraudulent unauthenticated transactions with X9.59 ... possibly the next weakest link in the current infrastructure is the lost/stolen business process. This weakness also applies to calling up your local friendly CA and convincing them that your hardware token containing your private key has been lost/stolen.
this doesn't detract from x9.59 being able to eliminate the credit card number harvesting exploit (using harvested numbers to execute fraudulent unauthenticated transactions) ... it just says that fraud will have to move someplace else.
at least the straightforward weakness in the lost/stolen process is denial of service ... as opposed to fraudulently obtaining value thru fraudulent unauthenticated transactions. projected fraud costs from issues associated with lost/stolen reports are significantly less than the current fraud costs associated with fraudulent unauthenticated transactions.
note the CRL issue and whether or not CRLs are signed ... are independent of how you can convince your local friendly CA that your hardware token has been lost/stolen.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: PKI and Non-repudiation practicalities Newsgroups: sci.crypt Date: Sun, 11 Mar 2001 21:50:49 GMT"Lyalc" writes:
there is a bunch of stuff in x9.84 (Biometric Information Management and Security) having to do with protecting a biometric shared-secret infrastructure. In a PIN-based shared-secret scenario, when the PIN value is compromised, invalidate the old PIN and issue a new one. The problem in a biometric-based shared-secret scenario is that when a biometric value is compromised, it is difficult to issue new fingers, eyeballs, DNA, etc. at the current technology state-of-the-art.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Risk management vs security policy Newsgroups: comp.security.misc Date: Sun, 11 Mar 2001 21:54:59 GMTJon Haugsand writes:
references are to asset liability management.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: SSL weaknesses Newsgroups: comp.security.misc Date: Mon, 12 Mar 2001 15:30:49 GMTAmy writes:
random refs:
https://www.garlic.com/~lynn/aepay4.htm ... whole merchant comfort certificate thread
https://www.garlic.com/~lynn/aepay4.htm#comcert14 ... a root cert list
https://www.garlic.com/~lynn/2001c.html#8
https://www.garlic.com/~lynn/2001c.html#9
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: SSL weaknesses Newsgroups: comp.security.misc Date: Mon, 12 Mar 2001 20:13:48 GMTp27604@email.mot.com (Doug Stell) writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: confused about active scripting and security Newsgroups: alt.comp.virus,alt.security,comp.security.misc Date: Tue, 13 Mar 2001 00:41:44 GMTGary Flynn writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Key Recovery System/Product Newsgroups: sci.crypt Date: Wed, 14 Mar 2001 15:06:13 GMT"Arnold Shore" writes:
Generalizing that to multiple-access gets into either compromising the user's private key ... and/or having a duplicate repository. The duplicate repository is either early binding with the additional party's public key or uses more traditional late binding authorization/authentication (still could be done with public/private key authentication ... but goes thru some form of ACL list along with audit trail).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: KI-10 vs. IBM at Rutgers Newsgroups: alt.folklore.computers Date: Wed, 14 Mar 2001 18:15:53 GMTLars Poulsen writes:
when my wife and I did the HA/CMP product ... we had our previous experience from having done mainframe clusters in the '70s. Also, when I was doing the initial distributed lock manager for HA/CMP ... a couple of the top RDBMS vendors provided us with the top ten things done wrong by the VMS distributed lock manager and suggested that we needed to find a better way of doing it ... it is always easier to start from scratch building on previous experience.
random refs:
https://www.garlic.com/~lynn/94.html#15 cp disk story
https://www.garlic.com/~lynn/94.html#19 Dual-ported disks?
https://www.garlic.com/~lynn/94.html#31 High Speed Data Transport (HSDT)
https://www.garlic.com/~lynn/95.html#13 SSA
https://www.garlic.com/~lynn/97.html#29 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/97.html#4 Mythical beasts (was IBM... mainframe)
https://www.garlic.com/~lynn/98.html#35a Drive letters
https://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
https://www.garlic.com/~lynn/98.html#49 Edsger Dijkstra: the blackest week of his professional life
https://www.garlic.com/~lynn/98.html#58 Reliability and SMPs
https://www.garlic.com/~lynn/99.html#182 Clustering systems
https://www.garlic.com/~lynn/99.html#183 Clustering systems
https://www.garlic.com/~lynn/99.html#219 Study says buffer overflow is most common security bug
https://www.garlic.com/~lynn/2000.html#78 Mainframe operating systems
https://www.garlic.com/~lynn/2000b.html#45 OSA-Express Gigabit Ethernet card planning
https://www.garlic.com/~lynn/2000c.html#56 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000d.html#31 RS/6000 vs. System/390 architecture?
https://www.garlic.com/~lynn/2000e.html#22 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000e.html#49 How did Oracle get started?
https://www.garlic.com/~lynn/2001.html#26 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001.html#40 Disk drive behavior
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: What ever happened to WAIS? Newsgroups: alt.folklore.computers Date: Thu, 15 Mar 2001 15:46:54 GMTCharles Eicher writes:
national library of medicine & library of congress had a number of sigwais/z39.50/sigir meetings (nlm has full CD of just the words/terms they use for indexing).
wais inc, was looking for revenue streams ... and the search engine guys offering "free" indexing of the web probably closed some of those avenues.
following is from internet monthly report for march 1997 of activity at internic registration services
Gopher  connections:    18222   retrievals:   20580
WAIS    connections:    36532   retrievals:   15926
FTP     connections:    90868   retrievals:  178451
Telnet                  67372
Http                 12855124
totally random ref:
https://www.garlic.com/~lynn/2000d.html#64
https://www.garlic.com/~lynn/94.html#26
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM Glossary Newsgroups: bit.listserv.ibm-main Date: Fri, 16 Mar 2001 16:05:11 GMTsknutson@LANDMARK.COM (Sam Knutson) writes:
there are payment, security, and financial glossaries; as well as index of ietf standards
https://www.garlic.com/~lynn/payment.htm
https://www.garlic.com/~lynn/secure.htm
https://www.garlic.com/~lynn/x9f.htm
https://www.garlic.com/~lynn/financial.htm
https://www.garlic.com/~lynn/rfcietff.htm
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Wheeler and Wheeler Newsgroups: alt.folklore.computers Date: Fri, 16 Mar 2001 16:10:25 GMTPaul Repacholi writes:
anne was in gburg, the catcher in the jes group for asp turning into jes3
for awhile anne was in pok, responsible for loosely-coupled (aka cluster) architecture ... wrote Peer-Coupled Shared Data which became the basis for IMS hot-standby and then parallel sysplex. also for awhile she had the position of manager of 6000 engineering architecture. She also has a patent or two on token LANs.
ha/cmp is High Availability Cluster Multi-Processing ... a product that we started in the late '80s when we were also working on fiber-channel cluster scale-up (before a lot of cluster work was really popular on "open" platforms).
we also ran a high-speed backbone (among other things, running large rios chip design files between austin and the first large-scale hardware chip logic simulator on the west coast ... which was about 50,000 times faster than software logic simulators of the time).
random refs at
https://www.garlic.com/~lynn/
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Wheeler and Wheeler Newsgroups: alt.folklore.computers Date: Fri, 16 Mar 2001 16:35:06 GMT... and for the past several years we've done work on various kinds of systems, including financial systems ... random refs:
https://www.garlic.com/~lynn/aadsm5.htm#2
https://www.garlic.com/~lynn/aadsm5.htm#3
https://www.garlic.com/~lynn/aadsm5.htm#1
https://www.garlic.com/~lynn/aadsm5.htm#4
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Wheeler and Wheeler Newsgroups: alt.folklore.computers Date: Fri, 16 Mar 2001 16:50:49 GMTAnne & Lynn Wheeler writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: PKI and Non-repudiation practicalities Newsgroups: sci.crypt Date: Fri, 16 Mar 2001 22:21:43 GMTnot-a-real-address@usa.net (those who know me have no need of my name) writes:
x9 web site is at
http://www.x9.org
TC68 is the iso/international financial standards organization (financial institutions). The US is the chair of TC68.
TC68 web site is at
http://www.tc68.org
there is also a tc68 page at the ISO home
http://www.iso.ch/meme/TC68.html
X9.59 recently passed (electronic standard for all account-based
retail payments):
https://www.garlic.com/~lynn/aepay6.htm#x959dstu
and is now on the ANSI electronic store web site:
https://web.archive.org/web/20020214081019/http://webstore.ansi.org/ansidocstore/product.asp?sku=DSTU+X9.59-2000
the addenda field to carry the additional X9.59 data is
already going forward as part of the standard five year review for
ISO8583. mapping of x9.59 to iso8583 (i.e. all payment cards) ref:
https://www.garlic.com/~lynn/8583flow.htm
X9 also passed an AADS NWI (i.e. new work item, the process by which
X9 financial standards work is done, for instance X9.59 was originally
a NWI).
https://www.garlic.com/~lynn/aepay3.htm#aadsnwi
The draft document for the AADS NWI was the AADS document in IETF
draft format (with some wrappers) ... but will be significantly redone
since the document formats for IETF and X9 are significantly different
https://www.garlic.com/~lynn/draft-wheeler-ipki-aads-01.txt
many financial institutions have already gone to relying-party-only
certificates because of the significant privacy & liability issues with
traditional TTP/3rd party identity certificates. It is relatively
trivial to show that for relying-party-only certificates appended to
account-based financial transactions that the certificates can be
compressed to zero-bytes.
https://www.garlic.com/~lynn/ansiepay.htm#aadsnwi2
archives of the X9A10 working group mailing lists (the X9A10 retail
payment standards organization that produced X9.59)
http://lists.commerce.net/archives/ansi-epay/
https://web.archive.org/web/20020604031659/http://lists.commerce.net/archives/ansi-epay/
reference to NACHA accepting a similar AADS specification for ACH
https://www.garlic.com/~lynn/99.html#224
misc. other x9.59 & aads references
https://www.garlic.com/~lynn/
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: PKI and Non-repudiation practicalities Newsgroups: sci.crypt Date: Fri, 16 Mar 2001 22:30:18 GMTnot-a-real-address@usa.net (those who know me have no need of my name) writes:
does the bank trust the card & does the bank trust who you are.
the current system has the bank mailing you the card and you get to call up and answer one or more questions (sometimes just touch-tone & an AR unit and sometimes it kicks out to a real person).
Given the AADS chip strawman that a bank can trust w/o having to mail you something ... then it comes down to: are you really you. The simplest is a website analogy to the 1-800 activation: you show you have the private key by signing something that includes the public key, and the web site asks you one or more questions (like ones off recent statements, from credit history, etc); the more questions that are answered, the higher the confidence (credit limit); confidence increases over time with use and/or somebody walking into their local branch.
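a hypothetical sketch of such a variable confidence policy (all numbers made up for illustration):

# starting credit limit scales with how many independent challenge
# questions were answered correctly (entirely illustrative policy)
def initial_credit_limit(answered_correctly):
    limits = [0, 500, 1500, 5000]       # 0, 1, 2, 3-or-more correct
    return limits[min(answered_correctly, 3)]

print(initial_credit_limit(1))   # 500
print(initial_credit_limit(3))   # 5000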
In this scenario, it switches from a bank choice about how many cards to an individual choice about how many cards ... the person can choose to use as many or as few different cards for their relationships as they wish ... however the lost/stolen pattern tends to be wallet/purse associated ... not single cards.
So the issue is whether the person has the same number of cards they have today, only one card ... or some number in between.
The risk/vulnerability analysis says that stealing cards is a much less lucrative business since the cards don't work w/o the PIN/secret key. Current credit cards can be used at point-of-sale w/o additional information, or the account number can be used (w/o the card) in MOTO/internet (aka mail order/telephone order) transactions (i.e. X9.59 introduces authenticated transactions which eliminates most of the current account number harvesting vulnerabilities).
Counterfeiting is a lot harder also ... i.e. one of the new major credit card frauds is a waiter with a PDA & magstripe reader inside their jacket; as they do their normal business, they are also harvesting the magstripe information with the PDA. Six hours later, there are counterfeit cards in use someplace else in the world (i.e. the actual card doesn't have to be lost/stolen, just the information from the magstripe). The chip, private key, & secret key are significantly more difficult to counterfeit.
So now the issue is what, if anything, new has to be done for lost/stolen reports. Just stealing the card doesn't do much good w/o the PIN. For most practical fraud purposes, extracting the private key from the chip costs more than any likely benefit. Most of the existing attacks go away since the costs rise significantly compared to any likely gain. With all the major existing easy fraud issues addressed, the weakest link is possibly the lost/stolen infrastructure ... either as a denial-of-service attack (forcing the infrastructure to deal with the person in a less secure way) and/or as part of getting a fraudulent replacement authorized. At worst, it is the same as it is today. An improvement is using some of the web server techniques associated with variable confidence based on the number of different random questions that can be answered, aka rather than an absolute lost/stolen report ... have a variable confidence lost/stolen report.
The other issue in lost/stolen reporting ... is losing your wallet/purse and your one (or more) authentication devices ... which may be collectively registered with 20 different relying parties ... can the notification be simplified. The current 1-800 global notification is a cost; somebody has to be prepared to pay for it. The success of any 1-800 service is going to depend on individuals being willing to fund it (directly or indirectly) or preferring to perform the function themselves (again an individual choice). The individual could automate the process personally with a PDA device (i.e. the centralized 1-800 is somewhat left over from the days when only corporate infrastructures had data processing capability). The downside is that the PDA may be in the same wallet/purse as the card(s) ... and also lost/stolen.
A 1-800 and/or automated PDA is possibly a vulnerability point since the information there could be used to compromise the infrastructure (also, a bank confirming with variable, random questions from recent statements/transactions isn't likely to get that information from a 1-800 service, since that requires some other type of trust relationship).
There are a number of ways of enhancing the lost/stolen reports (and replacements), both from the aspect of ease-of-use as well as vulnerabilities. The fixes also introduce costs and/or different kinds of vulnerabilities. I would like to see a dozen or more different approaches with well researched cost/vulnerability trade-offs and allow the market to decide how many of them are acceptable (similar to allowing a person to decide how many different authentication hardware tokens they may wish to possess and how they distribute their trust relationships among the hardware tokens).
However, if the frequency of (at least the stolen part of) lost/stolen drops, the aggregate costs for the lost/stolen infrastructure decline, even with a more sophisticated infrastructure introducing various forms of variable confidence techniques.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: database (or b-tree) page sizes Newsgroups: comp.arch Date: Sat, 17 Mar 2001 13:57:26 GMTdsiebert@excisethis.khamsin.net (Douglas Siebert) writes:
random refs:
https://www.garlic.com/~lynn/2000e.html#39
https://www.garlic.com/~lynn/2000.html#38
some number of ISP NNTP servers seem to run on the ragged edge (although I don't know whether it is disk, cpu, or memory) ... I notice significant slow-down at some times of the day (initial connection, even w/article handshaking).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: CNN reports... Newsgroups: us.military.army,alt.folklore.military,sci.military.naval, uk.people.ex-forces,soc.culture.kuwait Date: Sat, 17 Mar 2001 14:30:48 GMTJohn Lansford writes:
An interesting exercise might be to order a list of items by their perceived lethality (is that a word?) against a ranking of causes of accidental mortality. Smoking seems to be way up there (although i don't know if death by smoking would be considered an accident or not). More people have likely died this year from smoking than in all military related accidents.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Unix hard links Newsgroups: comp.arch,comp.arch.storage Date: Sun, 18 Mar 2001 14:57:05 GMT"Maxim S. Shatskih" writes:
Later a similar trick was done for the CMS EDF (aka extended) file system. Another trick was adding a flag indicating sorted FSTs in the file directory hyperblocks ... which significantly cut down file search overhead for very large numbers of FSTs.
One of the big differences between the CMS CDF & CMS EDF file systems was that the CDF file system always wrote the first record of the MFD at record 4. The CMS EDF file system changed that to alternate writing the first record of the MFD between record 4 and record 5 (with a version number). This covered the case of a transient write error (potentially corrupting the record on disk) at the same time as a system &/or power failure. At restart, both records 4 & 5 were read and the valid record with the latest version number was used.
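a sketch of the alternating-record trick (the real EDF validity check differed; a checksum stands in for it here, and the record format is made up):

import json, zlib

def pack(version, payload):
    body = json.dumps([version, payload]).encode()
    return zlib.crc32(body).to_bytes(4, "big") + body

def write_mfd(disk, version, payload):
    # even versions go to record 4, odd to record 5, so an interrupted
    # write can only damage the older of the two copies
    disk[4 + (version % 2)] = pack(version, payload)

def read_mfd(disk):
    best = None
    for record in (4, 5):
        raw = disk.get(record, b"")
        if len(raw) > 4 and zlib.crc32(raw[4:]) == int.from_bytes(raw[:4], "big"):
            version, payload = json.loads(raw[4:])
            if best is None or version > best[0]:
                best = (version, payload)      # valid record, latest version
    return best

disk = {}
write_mfd(disk, 1, "v1")        # record 5
write_mfd(disk, 2, "v2")        # record 4
disk[4] = b"\x00torn"           # simulate a write torn by a power failure
print(read_mfd(disk))           # (1, 'v1') ... the older but valid copy wins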
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: CNN reports... Newsgroups: alt.folklore.military Date: Sun, 18 Mar 2001 15:20:02 GMTBobMac writes:
w/o knowing the rates/# of people, there is some story that traffic accidents involving drivers using cellphones are rapidly closing on traffic accidents involving alcohol (i.e. are cellphones killing more people than guns?).
Even with military service tending to involve more dangerous activities ... i haven't seen any statistics that indicate that there are more fatal accidents per ??? than say DWI or DWcellphone (i.e. automobiles seem to be very dangerous things).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Unix hard links Newsgroups: comp.arch,comp.arch.storage Date: Mon, 19 Mar 2001 01:22:21 GMT"Bill Todd" writes:
a file system could potentially eliminate the requirement for replicated NVRAM ... by having a list of uncommitted records and, on power restoration, knowing enuf to read the individual record blocks in a RAID-5 set (w/o parity, and hope for the best), replace just the specific record and rewrite the whole RAID-5 set with new parity.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Q: ANSI X9.68 certificate format standard Newsgroups: sci.crypt Date: Mon, 19 Mar 2001 15:54:22 GMTTomás Perlines Hormann writes:
... or: since the relying party built and kept the original certificate at publickey registration time and transmitted a copy of the certificate to the publickey owner, to then have the publickey owner return a copy of the original certificate appended to every transaction sent to the relying party ... when the relying party has the original certificate ... is redundant and superfluous.
the nominal objective of the x9.68 compact/compressed certificate was to operate in a highly optimized account-based financial transaction environment that typically might involve existing transaction sizes of 80 bytes or less. The addition to such transactions of both a digital signature and a 4k-12k byte publickey certificate would represent significant bloat in the size of the financial transactions.
random refs:
https://www.garlic.com/~lynn/aadsm5.htm#x959
https://www.garlic.com/~lynn/2001c.html#72
http://www.x9.org/
from old x9.68 draft introduction (ISO 15782-1 is the work of ISO
TC68/SC2 international financial standards body):
This standard defines syntax for a more compact certificate than that
defined in ISO 15782-1 and X.509. This syntax is appropriate for use
in environments with constraints imposed by mobility and/or limited
bandwidth (e.g., wireless communications with personal digital
assistants), high volumes of transactions (e.g., Internet commerce),
or limited storage capacity (e.g., smart cards). This syntax is also
geared towards use in account-based systems.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Unix hard links Newsgroups: comp.arch,comp.arch.storage Date: Mon, 19 Mar 2001 16:01:30 GMT"Stephen Fuld" writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Unix hard links Newsgroups: comp.arch,comp.arch.storage Date: Mon, 19 Mar 2001 16:35:52 GMT"Bill Todd" writes:
At that point the stripe and the parity record may be inconsistent. Normal filesystem commit is only with respect to the record being written and not with respect to the other records in a raid-5 stripe and/or the parity record.
A "commit" filesystem supporting roll-foward with RAID-5 sensitivity might be able to eliminate the need for the NVRAM for covering the case of parity & stripe inconsistency, the filesystem log of all pending writes could be used to imply all possibly parity records that might possibly inconsistent with their associated stripes ... and have the filesystem only rebuild the parity record for those specific stripes (rather than all parity records).
On recovery, the filesystem could do a full stripe read for each pending write that it has from its log, update just the specific record and rewrite the full stripe ... forcing parity record consistency
then there is the problem that any power failure may have resulted in bad/aborted i/o write ... and then there is case of the two writes just happened to occur simultaneously and the power failure resulted in both writes (parity and record) being bad (i.e. aborted write leaves the records being written with i/o error indication). A raid-5 sensitive filesystem could still read the remaining records in the stripe (w/o parity), update the pending record write from the log and rewrite the full stripe (with new parity).
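a sketch of that log-driven recovery (record sizes & names illustrative): re-read the surviving data records, apply the logged pending write, recompute parity from scratch:

from functools import reduce

def xor_parity(records):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*records))

def recover_stripe(data_records, pending_index, pending_record):
    # data_records: the stripe's data records read back after power failure;
    # the old parity record is ignored rather than trusted
    repaired = list(data_records)
    repaired[pending_index] = pending_record   # from the filesystem log
    return repaired, xor_parity(repaired)      # parity is forced consistent

stripe = [b"\x01\x02", b"\x03\x04", b"\xff\xff"]
data, parity = recover_stripe(stripe, 2, b"\x05\x06")
print(parity)   # b'\x07\x00' = (01^03^05, 02^04^06)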
Of course, a similar strategy could be implemented in a controller with NVRAM (keeping track of which operations are in progress and attempting to mask situations where partially complete operations could have occurred on disk and there may be no on-disk error indication of the incompleteness/inconsistency).
random refs
https://www.garlic.com/~lynn/2001b.html#3
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: ARP timeout? Newsgroups: comp.security.firewalls Date: Mon, 19 Mar 2001 18:22:22 GMT"Anon" writes:
the normal internet nominally has at least two levels of network indirection:
host name ... stuff of the form www.abc.com
which the domain name infrastructure will take and resolve to an internet address ... of the form 1.1.1.1;
the internet address normally still needs to be resolved into a real "network" address ... like the address of an ethernet card.
ARP, the address resolution protocol, handles the resolution of an internet address to a network address. most implementations have local ARP caches that are kept at each machine ... giving the mapping between an internet address and a network address. This cache has time-out values for the entries ... after which the ARP protocol has to go out and resolve the address again.
In addition, a DHCP/BOOTP service can implement reverse ARP ... i.e. a mapping from a network address to an internet address. Some firewalls have a DHCP/BOOTP implementation ... where machines on the local LAN receive their IP address dynamically from the firewall.
PPP dialup and some types of DSL service also use DHCP to assign internet addresses ... i.e. when the initial connection is made, an IP address is dynamically assigned to the connecting machine by the ISP using DHCP. For DSL service doing dynamic internet address assignment, it is possible that it is configured so that the address assignment periodically times out and an internet address needs to be re-acquired.
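a minimal sketch of such a local ARP cache (the 300 second timeout and the names are illustrative; real stacks and timeouts vary):

import time

class ArpCache:
    def __init__(self, ttl=300.0):
        self.ttl = ttl
        self.entries = {}                 # ip -> (mac, time learned)

    def learn(self, ip, mac):
        self.entries[ip] = (mac, time.time())

    def lookup(self, ip):
        entry = self.entries.get(ip)
        if entry and time.time() - entry[1] < self.ttl:
            return entry[0]
        self.entries.pop(ip, None)        # expired: must re-ARP on the wire
        return None

cache = ArpCache()
cache.learn("1.1.1.1", "00:11:22:33:44:55")
print(cache.lookup("1.1.1.1"))            # the mac, until the entry times out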
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: CNN reports... Newsgroups: alt.folklore.military Date: Mon, 19 Mar 2001 21:02:21 GMTBobMac writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: database (or b-tree) page sizes Newsgroups: comp.arch Date: Tue, 20 Mar 2001 16:41:17 GMTpg+nm@sabi.Clara.co.UK (Piercarlo Grandi) writes:
the inverted tables allowed a segment value to be an "id" rather than a pointer to a page table. However, the mechanics of the segment value were orthogonal to whether inverted tables were used or not.
ROMP supported a 12-bit segment id value. The top four bits of a 32-bit address were used to index a segment register. The TLB was indexed using the 12-bit segment id from the segment register plus the 16-bit virtual page number (the 28-bit in-segment address less the 12-bit page offset, i.e. 28-12=16 bits).
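a sketch of that lookup (field widths from the above; everything else illustrative):

def romp_tlb_tag(vaddr32, segment_regs):
    sreg   = (vaddr32 >> 28) & 0xF       # which of 16 segment registers
    segid  = segment_regs[sreg] & 0xFFF  # 12-bit segment id
    vpage  = (vaddr32 >> 12) & 0xFFFF    # 16-bit virtual page number
    offset = vaddr32 & 0xFFF             # byte within the 4k page
    return (segid, vpage), offset        # (segid, vpage) is the TLB tag

segment_regs = [0] * 16
segment_regs[3] = 0xABC
print(romp_tlb_tag(0x31234567, segment_regs))
# ((2748, 4660), 1383), i.e. ((0xABC, 0x1234), 0x567)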
In a non-inverted architecture ... the corresponding solution is a segment id that is an address pointing to a page table. Rather than directly indexing the TLB with a concatenation of the page table address plus the virtual address, there can be some sort of page table address associative table ... where the TLB is indexed by a concatenation of the page table address entry index and the virtual address. Say there is a 16-entry address table ... the segment is looked up in the address table and its 4-bit index is then taken and concatenated to the virtual address for the TLB lookup. If the segment is not currently in the associative array, an entry is selected for invalidation & replacement and all the TLB entries with that index are flushed.
Another solution is that there is a two-level table, a segment table of page table addresses (i.e. segment ids) and then the page tables. Rather than having the virtual addresses in the TLB associated with a specific segment, they are associated with a specific address space (and the TLB entries are tagged as to the address space they are associated with). In this situation, "shared segments" may have duplicate entries in the TLB ... i.e. the same virtual address in the same shared segment but in different address spaces can have a unique entry per address space.
Of course the simplest solution is not to tag the TLB entries at all, just every time there is an address space switch, flush all the TLB entries.
Anyway, back to ROMP/801. The TLB entries effectively have a 16-bit virtual page no. plus a 12-bit segment id tag. Changing values in a segment register had no (direct) effect on the TLB since the TLB segment tag field is as large as the maximum possible segment id value. The original ROMP/801 design compensated for the small number of simultaneous segments in a virtual address space with the assumption that inline code could change the segment id values in the segment registers as easily as changing address values in address registers (i.e. ROMP/801 was supposedly viewed as having a 40-bit virtual address space based on the implied ability for inline code to change segment id values as easily as base registers ... i.e. 12-bit segment id + 28-bit address ... gives 40 bits of addressing).
For RIOS/801 (i.e. power), the segment id value was doubled from 12 bits to 24 bits. Now each TLB entry had a 24bit segment id tag plus a 16bit virtual page number. RIOS/801 theoretically had a 52bit virtual address space (24bits segment id plus 28bit virtual address = 52bits) under the implied assumption that inline code could change segment ids in segment registers as easily as addresses could be changed in address registers.
However, mapping ROMP/RIOS/801 to an open system like unix, which did late permission/security binding, basically reduced the infrastructure back to a standard 32-bit virtual address space (4-bit segment index plus 28-bit virtual address) since the operating system permission paradigm was completely different from the original ROMP/RIOS/801 design point. Rather than doing early permission/security validation and allowing inline code to have access to the complete infrastructure, permission and security checking was done with system calls at runtime.
The RIOS/power infrastructure still used a 24-bit segment id, and the TLB still supported tagging the entry with a 24-bit segment id plus a 16-bit virtual page number ... but the unix "virtual address" space programming paradigm assumed relatively long-term binding of segments to addresses with system calls and typically some amount of planning allowing segments in an address space to change (different from the programming, permissions, and security paradigm that the hardware was designed for).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Unix hard links Newsgroups: comp.arch,comp.arch.storage Date: Tue, 20 Mar 2001 17:14:15 GMTJan Vorbrueggen writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: A future supercomputer Newsgroups: sci.crypt Date: Tue, 20 Mar 2001 23:47:07 GMTQuisquater writes:
random other URLs/refs:
https://www.garlic.com/~lynn/2000d.html#2
https://www.garlic.com/~lynn/2000d.html#3
https://www.garlic.com/~lynn/95.html#13
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: "Bootstrap" Newsgroups: alt.folklore.computers Date: Wed, 21 Mar 2001 15:19:38 GMTjcmorris@jmorris-pc.MITRE.ORG (Joe Morris) writes:
it was possible to generate a program in self-loading format ... i.e. so that it didn't need the loader in front of it.
and of course there was the 360 3-card loader.
the 360 microcode "IPL" (initial program load) hardware sequence would load/read 24 bytes starting at location zero. The first eight bytes were assumed to be a PSW (program status word) and the 2nd/3rd doublewords were assumed to be CCWs, i.e. I/O program instructions. After the 24 bytes were read into location zero, the hardware would continue the I/O program with the CCW at location 8 (i.e. the 2nd 8 bytes, presumably more I/O program instructions). After the I/O program terminated, the hardware would load the PSW from location zero. The IPL sequence would read the 24 bytes with SILI (suppress incorrect length indication), so regardless of the actual record length, only the first 24 bytes were read and the rest of any data in the record was ignored (i.e. it wasn't possible to do a one card loader on the 360, since the remaining bytes in the card were ignored).
The 360 PSW contained a bunch of status bits and the instruction counter ... i.e. where to start executing. The assumption was that the 2nd/3rd CCWs of the I/O program would read instructions into some storage location and then the PSW would cause a branch to that location. There were also more complex loading sequences ... where the 2nd CCW read additional CCWs and the 3rd CCW branched/TICed to the additional I/O program CCWs. The additional I/O program CCWs would then presumably read instructions ... and eventually at some point the I/O program would finish and the LOAD hardware would pick up the PSW at location zero.
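For what it's worth, the 24-byte record can be sketched as a simple layout (illustrative only; the actual PSW and CCW bit encodings are considerably more involved than shown here):

    #include <stdint.h>

    /* the 24 bytes read (with SILI) into location zero by the IPL
       hardware sequence; anything past 24 bytes in the record is
       ignored */
    struct ipl_record {
        uint8_t psw[8];   /* loaded by the hardware after the I/O
                             program ends; its instruction address
                             is where execution begins */
        uint8_t ccw1[8];  /* channel program continues here at
                             location 8, typically a read into
                             storage */
        uint8_t ccw2[8];  /* e.g. another read, or a TIC (branch)
                             to further CCWs that were just read in */
    };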
The 3-card loader had 24 bytes of binary/hex in the first card ... for the initial hardware load sequence ... and then the next two cards contained 160 bytes of instructions ... which would read additional cards from the device and do whatever was necessary.
random refs:
https://www.garlic.com/~lynn/94.html#11
https://www.garlic.com/~lynn/98.html#9
https://www.garlic.com/~lynn/99.html#135
https://www.garlic.com/~lynn/2001b.html#23
https://www.garlic.com/~lynn/2001b.html#26
https://www.garlic.com/~lynn/2001b.html#27
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Unix hard links Newsgroups: comp.arch,comp.arch.storage Date: Wed, 21 Mar 2001 15:46:57 GMTPaul Repacholi writes:
It was possible to change the flavor of almost anything by creating an executable with the same name as a kernel call or by creating a script with the same name as an executable.
The original CMS text formatter was called SCRIPT (i.e. a precursor to GML, SGML, HTML, etc). It was possible to have both a private copy of the binary executable with the filename SCRIPT as well as a script file with the filename SCRIPT. Typing the command SCRIPT would pick up the script file ... and the script file could force invoking the executable SCRIPT file. Other tricks could be played: with a small hack, it was also possible to call kernel routines expecting binary arguments from a script file. Basically kernel calls, executables, script files, etc, were presented as a uniform, consistent interface regardless of how invoked.
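A minimal sketch of the uniform lookup idea (illustrative C, not actual CMS code; the names and stand-in tables are made up): a command name resolves first to a script file, then to an executable, then to a kernel function, so the caller never has to know which kind it got:

    #include <string.h>

    enum cmd_kind { CMD_SCRIPT, CMD_EXEC, CMD_KERNEL, CMD_NOTFOUND };

    /* stand-ins for the real file status table and kernel name table */
    static const char *scripts[] = { "script", 0 };
    static const char *modules[] = { "script", "copyfile", 0 };
    static const char *kernel[]  = { "rdbuf", "wrbuf", 0 };

    static int in_list(const char **list, const char *name)
    {
        for (; *list; list++)
            if (strcmp(*list, name) == 0)
                return 1;
        return 0;
    }

    /* the search order gives script files first crack at a name, so
       a script named "script" can wrap the executable of the same
       name (which it can then explicitly invoke) */
    enum cmd_kind resolve(const char *name)
    {
        if (in_list(scripts, name)) return CMD_SCRIPT;
        if (in_list(modules, name)) return CMD_EXEC;
        if (in_list(kernel,  name)) return CMD_KERNEL;
        return CMD_NOTFOUND;
    }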
For VM/370, CMS was enhanced to support a kernel call mode that effectively used an index into the kernel name table (rather than an 8-character name).
However, the prior reference about the enhancement to sort the file status table ... was in part motivated by the extensive use of command lookup in CMS. With a sorted file status table, a binary/radix search could be performed rather than a linear search. Since the command search sequence occurred not only for manually entered commands but also for kernel calls, it was a very frequently executed function.
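The win is the usual one: a linear scan of an N-entry table averages about N/2 comparisons per lookup, while a binary search of a sorted table takes about log2(N). A minimal sketch (illustrative C; the real file status table layout differs, and real CMS names are blank-padded):

    #include <string.h>

    struct fst {
        char name[8];     /* fixed 8-character filename */
        /* ... other file status fields ... */
    };

    /* binary search of a name-sorted file status table */
    const struct fst *fst_find(const struct fst *tab, int n,
                               const char *name)
    {
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            int c = strncmp(name, tab[mid].name, 8);
            if (c == 0) return &tab[mid];
            if (c < 0)  hi = mid - 1;
            else        lo = mid + 1;
        }
        return 0;   /* miss: fall through to the next search level */
    }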
random ref:
https://www.garlic.com/~lynn/2001c.html#76
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: database (or b-tree) page sizes Newsgroups: comp.arch Date: Wed, 21 Mar 2001 16:06:50 GMTanton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
Basically the executable was page mapped ... and quite a bit of it was prefetched ... starting with the initial executable address. Depending on configuration and contention for real storage ... all or part of the executable could be prefetched in a single I/O operation, all of it could be prefetched in chunks with multiple I/O operations, some of it could be prefetched and other parts demand paged, etc.
This had the advantage of shortening the initial program start latency by getting the initial part of the executable into memory and starting execution ... without necessarily having to wait until the whole executable was in storage (and at the same time without having to incur the latency of demand paging individual pages).
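The CMS page-mapped filesystem long predates these interfaces, but a loose modern analogue in unix terms (a sketch only, assuming POSIX mmap/posix_madvise are available) is to map the executable and hint that the leading chunk be brought in as one operation, letting the rest fault in on demand:

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* map an executable image and prefetch its leading portion;
       later pages are demand-faulted as execution reaches them */
    void *map_with_prefetch(const char *path, size_t prefetch_bytes)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0) return MAP_FAILED;

        struct stat st;
        if (fstat(fd, &st) < 0) { close(fd); return MAP_FAILED; }

        void *base = mmap(0, st.st_size, PROT_READ | PROT_EXEC,
                          MAP_PRIVATE, fd, 0);
        close(fd);   /* mapping stays valid after close */
        if (base == MAP_FAILED) return MAP_FAILED;

        if ((off_t)prefetch_bytes > st.st_size)
            prefetch_bytes = st.st_size;
        posix_madvise(base, prefetch_bytes, POSIX_MADV_WILLNEED);

        return base;
    }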
On large multi-user systems ... the improvement was significant ... on the order of the difference between using a lightly loaded system and using the same system under heavy load (where there is significant program start latency because of contention).
However, it also made a big difference in mapping VM/370 to XT/370. For XT/370 there was a 370 co-processor card in a PC, and VM/370 was modified to do I/O via an inter-processor call to DOS running on the 8088. For disk I/O, DOS would emulate the VM/370 disk I/O with files on the XT hard disk (100ms/access). With the standard executable loading process, migration from a standard 370 with standard mainframe disks to XT/370 (and its 100ms/access hard disk) represented a significant increase in program start-up time. The hybrid approach with the page-mapped filesystem helped mask the program startup latency (in much the same way the technique is used to mask some cache miss latency).
random refs:
https://www.garlic.com/~lynn/96.html#19
https://www.garlic.com/~lynn/96.html#23
https://www.garlic.com/~lynn/2000.html#5
https://www.garlic.com/~lynn/2000.html#29
https://www.garlic.com/~lynn/2000.html#75
https://www.garlic.com/~lynn/2001c.html#76
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/