Ellis, B. (2004), "Internet commentary", Circuit World, Vol. 30 No. 3. https://doi.org/10.1108/cw.2004.21730cag.001
Copyright © 2004, Emerald Group Publishing Limited
Certainly this is a duty, not a sin. Cleanliness is, indeed, next to godliness (Note 1).
Before I start my commentary proper, please allow me to digress on to something which is completely off-topic for this journal. I have just finished reading a very remarkable book. It is called A Short History of Nearly Everything, by Bill Bryson (ISBN 0-385-40818-8). It gives a remarkable summary of everything that has happened between the Big Bang and man exterminating his fellow creatures. It is written with the author's usual humour and gives an insight, often very uncomplimentary, into the scientists who discovered this or that phenomenon. Even more remarkable for a popular science book, it is fully referenced. For a mere £10 at amazon.co.uk, it is an extremely good read for anyone.
I have received an e-mail from Charles Liu in Singapore, asking me for more details on Internet protocol (IP) numbers and their meaning. This is an enormous subject on which many books have been written, along with transmission control protocol (TCP). It is therefore impossible to give a full explanation in a short paper. In the first place, IP means a lot more than just the numbers. You will no doubt have seen, when sending a large attachment across the Internet, that the transmitted size is often considerably larger than you would anticipate. This is because the attachment is broken up into packets of a convenient size, and each packet or "datagram" has its own header, formatted according to the IP and the TCP. Each packet may theoretically be up to 64 kB long, but many systems limit them to a maximum of 576 bytes (512 bytes of useful data plus a maximum 64 byte, i.e. 512 bit, header).
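For the more computer-literate reader, this splitting can be sketched in a few lines of Python. This is a toy illustration only, not real IP code: the header here is a simple stand-in, and the 512 byte payload and 64 byte header figures follow the text above.

```python
# Illustrative sketch (not real IP): split a payload into fixed-size
# packets, each carrying its own header, to show why the total bytes
# sent exceed the size of the original attachment.

PAYLOAD_SIZE = 512          # useful data per packet (bytes)
HEADER_SIZE = 64            # maximum header: 512 bits = 64 bytes

def packetise(data: bytes):
    """Break data into header + payload chunks of at most PAYLOAD_SIZE."""
    packets = []
    for offset in range(0, len(data), PAYLOAD_SIZE):
        payload = data[offset:offset + PAYLOAD_SIZE]
        # A stand-in header: just the fragment offset, padded to HEADER_SIZE
        header = str(offset).encode().ljust(HEADER_SIZE, b"\0")
        packets.append(header + payload)
    return packets

attachment = b"x" * 2000                    # a 2000-byte "attachment"
packets = packetise(attachment)
total_sent = sum(len(p) for p in packets)   # 2000 bytes become 2256 on the wire
```

A 2000 byte attachment becomes four packets and 2256 bytes transmitted, the difference being the four headers.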
The IP header is rather like an envelope with an address and a stamp on it. When you post it, you will never know whether it reaches the person to whom it is addressed, unless he replies to you. In fact, the IP header contains enough information to allow the packet to be sent to its destination under normal circumstances, to tell the receiving computer how long it is, the type of packet, whether it is part of a series of packets forming a larger file and, if so, which part it is. There are also a number of other items which need not concern us here.
The destination address and the source address are identified by 32 bit binary numbers. For convenience, each "IP number" is usually split into four sections and each eight bit section is converted to a decimal number. This may take a form such as 220.127.116.11 or, in binary, 11011100011111110111010000001011. Theoretically, this gives a total of over 4 billion unique addresses. In practice, for many reasons, the number is very much smaller and we are running out of unique values. It is probable that the IP number will become a 128 bit value (the so-called IPv6 addressing) within a few years, but the principle will probably remain the same.
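The conversion between the dotted-decimal and binary forms is mechanical, as this short Python sketch shows (pure standard library, using the example address above):

```python
# Convert a dotted-decimal IP number to its 32-bit binary form and back.

def ip_to_binary(dotted: str) -> str:
    octets = [int(part) for part in dotted.split(".")]
    assert len(octets) == 4 and all(0 <= o <= 255 for o in octets)
    return "".join(f"{o:08b}" for o in octets)   # each octet -> 8 bits

def binary_to_ip(bits: str) -> str:
    assert len(bits) == 32
    return ".".join(str(int(bits[i:i + 8], 2)) for i in range(0, 32, 8))

bits = ip_to_binary("220.127.116.11")
# 32 bits in all; 2**32 is a little over 4.29 billion possible values
```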
So, how can we be sure that each computer connected to the Internet has a unique address? Well, it is quite complex. If you send me an e-mail at b_ellis@protonique.com, the essential part is after the @; protonique.com is an internationally registered domain which has a unique IP number allocated to it. The local server to which you send the message will consult a special server which will look up protonique.com and find the corresponding IP number; this server is part of a worldwide distributed database called the Domain Name System (DNS). The message will therefore (hopefully) arrive at a server which will read the b_ellis part and send it to the right mailbox on my Internet service provider's mail server.
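The two-stage lookup can be modelled in miniature. The table and the IP number below are invented purely for illustration (192.0.2.x is a range reserved for documentation); real DNS is, of course, a distributed database, not a single dictionary.

```python
# Toy model of the DNS step: map the domain (after the @) to an IP
# number, then use the local part (before the @) to pick the mailbox.

DNS_TABLE = {"protonique.com": "192.0.2.17"}   # hypothetical record

def route_mail(address: str):
    local_part, domain = address.split("@")
    ip = DNS_TABLE.get(domain)        # the DNS resolution step
    if ip is None:
        raise LookupError(f"no DNS record for {domain}")
    return ip, local_part             # deliver to this mailbox at this server

ip, mailbox = route_mail("b_ellis@protonique.com")
```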
The IP also has a means to limit the number of hops between servers that a message may take on its way to delivery. The reason for this is obvious: if messages which cannot be delivered were allowed to hang about indefinitely, trying all sorts of abstruse routings, then the Internet would become completely and irremediably blocked within seconds. The number of hops is therefore usually limited to 64.
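The mechanism is a simple countdown carried in the header, sketched here in Python; each router decrements the counter and discards the packet once it reaches zero, so nothing can circulate forever.

```python
# Sketch of the hop limit: each router decrements a time-to-live
# counter; at zero the packet is dropped rather than forwarded again.

MAX_HOPS = 64

def forward(packet: dict) -> bool:
    """Decrement the counter; return True if the packet may travel on."""
    packet["ttl"] -= 1
    return packet["ttl"] > 0

packet = {"ttl": MAX_HOPS, "data": b"undeliverable message"}
hops = 0
while forward(packet):        # an undeliverable packet bouncing between routers
    hops += 1
# the packet has now been discarded, after a bounded number of hops
```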
However, there is a big problem. If you send me an e-mail with just the IP and one packet or datagram goes missing, then the message becomes meaningless. In fact, you would never know whether I received your e-mail and I would never know that you had sent it. This is where TCP comes in; TCP is layered on top of the IP. The TCP part of the header actually checks that the packet or datagram received corresponds to what was sent. If this check is validated, a receipt for the packet is transmitted back to the sender in a coded form. If the sender's computer receives this receipt and checks, in turn, that everything is correct, then it knows that it can send the next packets (actually, it usually sends up to three or four packets simultaneously). If something goes awry and the sender's computer does not see a receipt within a second or two, it re-sends the appropriate packet again and again until a receipt is acknowledged. This is why the Internet is essentially a very reliable means of communication; the technical term for this two-way handshaking is a socket interface.
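A much-simplified sketch of this acknowledgement loop follows. Real TCP is far more elaborate (sliding windows, adaptive time-outs and so on); here the checksum, the acknowledgement and the lossy channel are all stand-ins invented for illustration.

```python
# Simplified "send until acknowledged" loop in the spirit of TCP.
import zlib

def make_packet(seq: int, payload: bytes) -> dict:
    return {"seq": seq, "payload": payload,
            "checksum": zlib.crc32(payload)}

def receive(packet: dict):
    """The receiver returns an acknowledgement only if the checksum validates."""
    if zlib.crc32(packet["payload"]) == packet["checksum"]:
        return {"ack": packet["seq"]}
    return None                                  # corrupted: no acknowledgement

def send_reliably(seq, payload, channel, max_tries=10):
    """Re-send through a lossy channel until the acknowledgement comes back."""
    for attempt in range(1, max_tries + 1):
        ack = channel(make_packet(seq, payload))
        if ack is not None and ack["ack"] == seq:
            return attempt                       # number of tries needed
    raise TimeoutError("no acknowledgement received")

# A channel that loses the first two transmissions, then delivers:
losses = [True, True]
def lossy_channel(packet):
    if losses:
        losses.pop()
        return None                              # packet lost in transit
    return receive(packet)

tries = send_reliably(1, b"data", lossy_channel)
```

Despite two lost transmissions, the packet gets through on the third attempt; the sender only moves on once the receipt is in hand.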
The TCP also ensures that the number of bytes in a packet or datagram is optimised for the available connection. This can actually vary during the transfer of a message or a Web page by mutual accord between the computers at each end, so that if communications are difficult or slow then the packet size will be reduced. It also adds a sequence number to the packets, so that if they arrive out of order, they can be reassembled correctly.
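Reassembly by sequence number is easily illustrated: however scrambled the arrival order, sorting on the number carried in each header restores the original message.

```python
# Packets may arrive in any order; the sequence number in each header
# lets the receiver put the payloads back in the right order.

def reassemble(packets):
    """packets: list of (sequence_number, payload) tuples, in arrival order."""
    return b"".join(payload for _, payload in sorted(packets))

arrived = [(2, b"lo wo"), (0, b"hel"), (3, b"rld"), (1, b"")]
message = reassemble(arrived)
```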
Embedded within the TCP/IP protocols are many other protocols. Some of these are used to identify the type of communications, so that the servers know how to deal with them. The five most important ones, which are layered over TCP, are the following:
HyperText Transfer Protocol or HTTP. This is used to define the format of a web page to be transmitted to the browser and to prefix a Uniform Resource Locator (URL) or Web site address, e.g. http://www.protonique.com, to take the example of my old company's Web site home page.
Secure HyperText Transfer Protocol or HTTPS. This is the same as HTTP, except that any information typed into fields or displayed is encrypted, so that casual "crackers" cannot elicit confidential information (note that nothing is guaranteed as 100 per cent secure and anything you type in can be monitored by third parties if they have managed to install spyware or a Trojan Horse on your system).
Simple Mail Transfer Protocol or SMTP. This defines a format for e-mail messages and their means of transfer from the sender to the receiver.
File Transfer Protocol or FTP. This is usually what is used when you download a file over the Net. Initially, this required ad hoc software, but most modern browsers can handle this protocol, as well as the HTTP(S) types, albeit with less flexibility and speed. I still recommend the use of dedicated utilities, such as WS_FTP, if you regularly need to send files to or receive files from FTP sites. FTP is also often used to upload web pages to their servers, although some Web site editing applications can do this via special web serving software in HTTP format, provided the server is able to accept some special extensions.
Network News Transfer Protocol or NNTP. This defines the format of UseNet style Newsgroups.
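In practice, a server tells these protocols apart by the "well-known" port number carried in the TCP header. The assignments below are the standard ones (the IANA registry); the little lookup function is merely illustrative.

```python
# Standard well-known TCP ports for the five protocols listed above.
WELL_KNOWN_PORTS = {
    "FTP": 21,      # File Transfer Protocol
    "SMTP": 25,     # Simple Mail Transfer Protocol
    "HTTP": 80,     # HyperText Transfer Protocol
    "NNTP": 119,    # Network News Transfer Protocol
    "HTTPS": 443,   # Secure HyperText Transfer Protocol
}

def service_for(port: int) -> str:
    """Identify which service a packet addressed to this port is for."""
    for name, p in WELL_KNOWN_PORTS.items():
        if p == port:
            return name
    return "unknown"
```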
I hope this brief explanation will help our Singaporean reader to understand the very basics of what is happening when he sends or receives an e-mail or browses a web page. I make no apology for having summarised a 600-page text book on IPs into half a page of this journal. If you want to know more, there is an entry-level TCP/IP for Dummies book or various other books, such as The TCP/IP Bible (a hefty tome) or the slightly lighter Sams Teach Yourself TCP/IP in 24 Hours (a very optimistic title, by the way!). Useful information also abounds on the Internet and http://www.bitpipe.com/data/rlist?t=987097377_16660325&sort_by=status&src=google or http://networking.ittoolbox.com/nav/t.asp?t=444&p=454&h1=444&h2=454 may be a couple of useful starting places, considering you will obtain over 3 million responses if you type TCP/IP into Google!
As many readers know, when I was director of the now-extinct Protonique Group of companies, I pioneered many aspects of ionic contamination testing, sometimes erroneously referred to as cleanliness testing. I developed the first computerised instrument of its type, as early as 1978, several years before any of our competitors. A lot of water has passed under the bridge since then. In 1991, I ceded the Contaminometer marque and range of instruments to the Multicore Group, which passed into the hands of Henkel/Loctite two or three years ago. They have ceded the Multicore SPCID products, including the Contaminometers, to Concoat Limited, because these did not belong to their core business. I therefore believe that I am well qualified to comment on Web sites devoted to this subject, even though I have not been active in the field for six years, since my last development contract with Multicore expired. I have no vested interest in the subject now.
The Zero Ion tester was a latecomer in the field, originally sold by the London Chemical Company (Lonco) of Illinois. It belongs to the so-called "dynamic" type of instrument, along with the Ionograph and two of the old Contaminometer models (see later), which could be used in either "dynamic" or "static" modes. Don't be misled by these terms, which are more commercial than technical: the difference is that "dynamic" means that the solution regeneration is done as the test proceeds and that the integration of the measured contamination is done electronically, whereas "static" means that the regeneration is done after the test has finished and the integration is automatic by the accumulation of the dissolved contamination in the measuring circuit. It has nothing to do with either the speed at which the test proceeds (in fact, the "dynamic" tests usually take longer than the "static" ones!) or the accuracy of the results. This page is a bare description of the instrument, along with some claims and terminology that may be somewhat misleading or out of date. For example, it equates the need for testing with ISO 9000. Don't get me wrong, this does not mean that the instrument is unsuitable for what it is meant to do; it is just that the information given on this single page is a little on the weak side. There is also a downloadable PDF file, but this does not give any more information than the web page.
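The "dynamic" versus "static" distinction can be put numerically. The readings below are invented purely for illustration: in "dynamic" mode the instrument integrates the contamination signal electronically, interval by interval, as the solution is regenerated during the test; in "static" mode the dissolved contamination simply accumulates and the total is read at the end. Both arrive at the same figure, which is why neither term implies anything about accuracy.

```python
# Invented per-interval contamination readings (arbitrary units).
readings = [0.0, 1.2, 2.5, 1.8, 0.9, 0.3]

# "Dynamic" mode: integrate the signal electronically as the test runs.
dynamic_total = 0.0
for r in readings:
    dynamic_total += r

# "Static" mode: the contamination accumulates in the solution;
# one reading of the accumulated total is taken after the test.
static_total = sum(readings)

# Both modes measure the same total contamination.
assert abs(dynamic_total - static_total) < 1e-9
```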
One of the lesser-known instruments, the ICOM (originally ICOH-M), was, unless I am much mistaken, originally developed by Fry's Metals or, at least, for this company. It is the only instrument that uses a spray-in-air, as opposed to a spray-under-surface, technique; as such, it is the one which is most prone to carbon dioxide absorption errors. These are reduced by using a starting resistivity as low as 20 MΩ (most others use at least 100 MΩ), reducing the sensitivity somewhat. This technique offers two advantages: it allows better penetration of solution under the components and the integration time can be much shorter. I suppose that this is a case of "horses for courses" and one has to weigh the advantages against the disadvantages. The page in question is strong on features and weak on technical details.
This PDF file is essentially the sales brochure for the Ionograph and the Omegameter, produced by Speedline Technologies' Specialty Coating Systems (SCS). This company is a member of the Cookson Electronics Group, as is Alpha Metals and Alpha Instruments, the former name of the manufacturers. Of course, the Omegameter was originally made by Kenco Inc., later absorbed into Alpha Metals. One can wonder why SCS have two separate lines of ionic contamination testers, but this is easily explained as the Ionograph is a "dynamic" instrument and the Omegameter a "static" one. This six-page brochure (which also includes an SIR tester irrelevant to this review) is complete and factual, to the point one would accept in a sales document. There is nothing controversial in it, even about the use of ionic contamination testing for "no-clean" applications. Although not really topical to this commentary, I was rather surprised not to see that the two names were marked as either Trade Marks or Registered Trade Marks, especially as Omegameter is a name that has been subjected to a bitter Class 9 (meaning, basically, scientific and industrial instrumentation) trade mark dispute between Omega Engineering Inc. and Omega SA, regarding period timers.
http://www.speedlinetechnologies.com/DOCS/Publications/APPLICATION-A%20Basic%20Disc%20of%20Ionic%20Cont%20Testing%2012-02.pdf
This is another document from SCS which purports to outline the basics of ionic contamination testing. On the whole, it is not bad, but it is written essentially to promote the SCS instruments, under the guise of a technical document. Its weakness is that, without specifically naming them, it tends to denigrate features available in competitive instruments and methods, revealing its bias. Naturally, it promotes the advantages offered by their own instruments. Interesting, but should be read with a pinch of equivalent sodium chloride.
The last contender is the Contaminometer, manufactured by Concoat Systems. This PDF file is, in fact, a single page that gives almost no information. The two instruments mentioned bear no resemblance to those which were made by Protonique and it would appear that they have been built down to a price, rather than built up as close to technical perfection as possible, which was the leitmotif in my day. I know, though, that they have retained some of the old features, including the highly sensitive and accurate measuring system.
This Web site belongs to Foresite Inc., earlier known as Contamination Studies Laboratories, a.k.a. CSL, the leading US consultants and laboratory on all matters related to contamination testing on electronics assemblies. It is well worth browsing through all their pages, because there is a wealth of information relating to ionic contamination testing available (Plate 1). The great majority of this is technically correct, although some more controversial subjects are treated, where there is no cut and dried "correct" answer, only opinion. In most of these points, I agree with the opinion published on this site. I cannot help it; I must quote one sentence, out of the FAQs on fluxes, "The road to a true non-rosin, no-clean manufacturing process is a hazardous one, lined with ruffians and all manner of evils." How eloquently put! This is a five-star site for technical content.
Plate 1 The Foresite Home Page. This is the most informative site for contamination-related data, so much so that one could almost call it the Foresite Saga (sorry, Mr Galsworthy!)
http://www.xs4all.nl/~tersted/PDF_files/CleaningPapers/qualifying_a_new_manufacturing_process.pdf
A few years old, this is a short paper by Doug Pauls, late of CSL, explaining how to qualify a new cleaning process, in conjunction with a given flux chemistry. Despite its age, it is as apposite today as the day that Doug put pen to paper or, more likely, finger to keyboard. It explains clearly how ionic contamination testing can be used as a means of industrial process control, after the qualification of the process.
This is a peculiar page, written in the style of an aide-mémoire about everything to do with cleaning and contamination control, without reaching any definitive conclusions. Regarding ionic contamination testing, I could not help but smile at "method 2.3.25 (beaker method) – (used only by fools)". Actually, there is worse that is not mentioned: the unique method that is specified in the original 1972 MIL-P-28809, in which I counted 22 unspecified variables, capable of introducing cumulative errors of about +200/275 per cent! It is unfortunate that all the methods we currently use are derived from it, albeit with many of the sources of error removed or reduced. It is like calibrating a micrometer with a wooden foot-rule!
Well, what else did I find in "Googling" through the subject? Hundreds of companies stating they have this or that ionic contamination tester, tens of resellers offering testing equipment, tens of used-equipment merchants and auctioneers offering second-hand instruments (often almost prehistoric: I found one instrument for sale that I had made in 1979 – with a projected lifetime of 5 years!) and little else.
Note 1 John Wesley, Sermons on Several Occasions, Sermon 88, (1788).