Paul A. Clayton's contributions

Comments
    • Moving to simpler cores will tend to run into the parallelism wall. In addition, more cores will mean more interconnect overhead--not as much overhead as the aggressive execution engines in a great big OoO core, but non-zero overhead. (Some multicore chips do have private L2 caches [sharing in L3]. Also, sharing would mainly add a small arbitration delay. With replicated tags, the arbitration delay in accessing data could be overlapped with tag checking. With preferential placement, a local partition of a shared L2 could typically supply data; less timing-critical [e.g., prefetch-friendly] data could be placed in a more remote partition.) Memory bandwidth is also a problem: pin (ball) count has not been scaling as rapidly as transistor count. (Tighter integration can help, but it can also make thermal issues harder to deal with.) While "necessity is the mother of invention", the problems are very difficult--arguably increasingly difficult: earlier, software-defined parallelism was not necessary for a single processor chip to achieve good performance; now, SIMD and multithreading are increasingly important. Even the cost of following the actual Moore's Law (doubling transistor count in a "manufacturable" chip) has been increasing, so economic limits might slow growth in compute performance. In any case, this can be viewed as an opportunity for clever design.

    • Expecting training would also tend to weed out sub-mediocre employees (either by making them better or by revealing that they are not willing to learn), which tends to improve productivity and morale. Longer employee retention, while excluding those unwilling to learn, would further increase productivity and morale by increasing cultural coherence and trust. Knowing whom to ask, when to ask, and how to ask are important factors in communication (and cannot be covered in a pamphlet or even an organizational wiki). Trust also substantially reduces the need for communication and improves morale (reducing stress, increasing feelings of being valued, and increasing feelings of community).

    • The only way "intrinsically-serial code" can be converted into parallel code is with algorithmic changes--and such a drastic rewrite is generally not considered a conversion. There is much more hope for converting *implicitly* serial code into explicitly parallel code, at least for limited degrees of parallelism (a sketch follows below). However, for personal computers, it seems doubtful that the benefit of programming for greater parallelism would justify the cost when most programs have "good enough" performance (or are limited by memory or I/O performance). If most code is not aggressively optimized for single-threaded operation, why would programmers seek to optimize the code for parallel operation? By the way, "Number 4: ARM's BIG.little heterogeneous cores" should, of course, be "Number 4: ARM's big.LITTLE heterogeneous cores"; ARM seems to be emphasizing the smaller core.
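
A minimal sketch of what such a conversion can look like, assuming a simple reduction loop; the loop is implicitly serial as written but carries no true dependence, so it admits limited explicit parallelism (all names here are invented for illustration):

```cpp
// Sketch: an implicitly serial reduction rewritten with explicit threads.
#include <cstdint>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Serial form: written as a sequential loop, but the additions are
// independent, so the serialization is implicit rather than intrinsic.
std::uint64_t sum_serial(const std::vector<std::uint32_t>& v) {
    std::uint64_t total = 0;
    for (std::uint32_t x : v) total += x;
    return total;
}

// Explicitly parallel form: each thread reduces a disjoint slice, then
// the partial sums are combined serially (a limited degree of parallelism).
std::uint64_t sum_parallel(const std::vector<std::uint32_t>& v,
                           unsigned nthreads) {
    std::vector<std::uint64_t> partial(nthreads, 0);
    std::vector<std::thread> workers;
    const std::size_t chunk = v.size() / nthreads;
    for (unsigned t = 0; t < nthreads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t + 1 == nthreads) ? v.size() : begin + chunk;
        workers.emplace_back([&v, &partial, t, begin, end] {
            partial[t] = std::accumulate(v.begin() + begin, v.begin() + end,
                                         std::uint64_t{0});
        });
    }
    for (auto& w : workers) w.join();
    return std::accumulate(partial.begin(), partial.end(), std::uint64_t{0});
}

int main() {
    std::vector<std::uint32_t> v(1u << 20, 1);
    std::cout << sum_serial(v) << ' ' << sum_parallel(v, 4) << '\n';
}
```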

    • Several compilers provide extensions supporting '0b' notation for binary literals (e.g., gcc http://gcc.gnu.org/onlinedocs/gcc/Binary-constants.html). C++11 also supports user-defined literals using suffixes (so one could define "_B" as such a suffix and have the compiler run--at compile time--a procedure which translates such strings into numbers). This stackoverflow question seems to be a good source of information on this topic: http://stackoverflow.com/questions/2611764/can-i-use-a-binary-literal-in-c-or-c
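
A minimal sketch of both approaches, assuming g++ with -std=c++11 (the "_B" suffix name is the invented one from above):

```cpp
#include <cstdint>
#include <iostream>

// C++11 user-defined literal: the compiler runs this translation of the
// digit string into a number at compile time via constexpr recursion.
constexpr std::uint64_t parse_binary(const char* s, std::uint64_t acc = 0) {
    return *s == '\0' ? acc
         : parse_binary(s + 1, (acc << 1) | (*s == '1' ? 1u : 0u));
}

constexpr std::uint64_t operator"" _B(const char* s) {
    return parse_binary(s);
}

int main() {
    int a = 0b101010;   // '0b' notation: a gcc extension (later standardized)
    auto b = 101010_B;  // user-defined literal, evaluated at compile time
    std::cout << a << ' ' << b << '\n';  // prints 42 42
}
```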

    • I think moving from PDF to XML could be good. If the industry could standardize on a DTD (or a collection of DTDs--or more information-rich schemas), it might become practical to develop tools that extract and manipulate the information. (Such collaboration seems unlikely, though.) Even with just a DTD, CSS could be used to provide context-specific presentation of the data in the XML document with almost any web browser. PDF is a print-oriented format; XML would be particularly appropriate for data-rich, non-narrative documents like datasheets, which have significant amounts of commonality in the type of information presented.

    • Just providing the source code (under a license that restricts use to those already licensed but allows modification, including modification by a third party) would be enough to help avoid some issues. (Open source would be better for the customer, of course.) Unfortunately, a company is unlikely to provide source code (and the required documentation) even after the product is no longer supported, because doing so might reveal trade secrets or reduce demand for new products. Revealing poorly written source code and documentation could also hurt a company's reputation, and preparing such material for public release would add costs for a product that is no longer generating revenue.

    • Amen! C++ is intended to be a multi-paradigm programming language; it is almost a superset of C.

    • The fatal defect rate per KLOC would presumably increase with program size (in the common case of intercommunicating modules). If two communicating modules each have a 5% chance of having a bug and a 0.05% chance of having an internally catastrophic bug, there is presumably a non-zero chance that a bug that is not internally catastrophic in one module will interact catastrophically with a like bug in the other module (a rough illustration follows below). While increasing the number of users tends to exercise more potential paths to failure, increasing the diversity of uses can exercise the system even more broadly. I also wonder how the metric of defect potential would apply to agile development, which might generate more total bugs even though bugs are discovered more quickly (unless the defect count is taken at the first candidate for open release).
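
A back-of-the-envelope sketch of that point, using the 5% and 0.05% figures above; the cross-module interaction probability is an invented assumption purely for illustration:

```cpp
#include <iostream>

int main() {
    const double p_bug = 0.05;    // chance a module has some bug
    const double p_cat = 0.0005;  // chance of an internally catastrophic bug
    // Invented assumption: 1% of co-occurring non-catastrophic bugs
    // interact catastrophically across the module boundary.
    const double p_interact = 0.01;

    // Both modules have a bug that is not internally catastrophic
    // (treating catastrophic bugs as a subset of all bugs).
    double p_both_buggy = (p_bug - p_cat) * (p_bug - p_cat);
    // Added chance of catastrophe from interaction alone.
    double p_cross = p_both_buggy * p_interact;
    // Roughly, at least one internal catastrophe in either module.
    double p_internal = 1.0 - (1.0 - p_cat) * (1.0 - p_cat);

    std::cout << "internal-only catastrophe: " << p_internal << '\n'
              << "added cross-module catastrophe: " << p_cross << '\n';
}
```

Even with these made-up numbers, the interaction term is non-zero and grows with the number of communicating module pairs, which is the intuition behind a rising per-KLOC rate.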

    • While I agree that there are tradeoffs among quality, cost, and time to market, I am not certain that the Patriot Missile and Therac cases are good examples. I received the impression that in the Therac case there was overconfidence in the quality of the code, and a simple, relatively inexpensive failsafe would have avoided the problem. The Patriot Missile system may also have increased the damage from a failure by targeting the ground, and (I suspect) it should have been fixed more quickly. (I do not remember there being that many Scud launches that were intercepted, though I suspect you are correct that delivering a faulty system early was better than nothing. The psychological advantage was also significant; doing something--even if ineffective--can help morale.) One problem, however, is that tolerance of defects can corrupt the development culture. Being a good manager (weighing these and the many other tradeoffs) is not easy.

    • [continuing] One of the strengths of C++ is that it is multiparadigm: one can write C-style code in C++. (C++ also provides better support for implementing features in libraries where C would have to extend the base language itself--e.g., complex numbers; a sketch follows below. This significantly facilitates extension of the language, and the more broadly useful a tool is, the more diversely it will be tested--finding bugs or misfeatures--and the richer its auxiliary facilities will become.) Even C is not as controllable as assembly, but the writing-speed and maintainability advantages alone often justify the use of C. (Portability is also a major factor--not only in allowing given code to be used on different ISAs but also in allowing more programmers to be familiar with C.) C++ sacrifices further control (while--like C supporting inline assembly--still supporting coding at a lower level) for similar benefits. I am not a programmer, but the benefits of a multiparadigm language seem obvious.
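
A minimal illustration of the library-versus-language point: complex arithmetic in C++ comes from an ordinary library type (std::complex), whereas C had to extend the base language with _Complex in C99.

```cpp
#include <complex>
#include <iostream>

int main() {
    // std::complex is a plain library template: operator overloading and
    // templates let it look built in without extending the language.
    std::complex<double> a(1.0, 2.0);
    std::complex<double> b(3.0, -1.0);
    std::complex<double> c = a * b + a;

    std::cout << c.real() << " + " << c.imag() << "i\n";  // 6 + 7i
    // In C, comparable support required a language extension:
    //   double _Complex z = 1.0 + 2.0*I;   (C99, <complex.h>)
}
```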