2017 week 3 in programming

Trello Is Being Acquired By Atlassian

We wanted Trello to be fluid and adaptable to a huge range of problems that all revolved around getting people on the same page. We’ve been humbled and excited to see the millions of use cases for Trello around the world, and we build the product each day proudly knowing we’re helping teams do great things. As part of Atlassian, Trello will be able to leverage investments in R&D that will enhance the product in meaningful ways. We are certain that Atlassian understands the unique and novel reasons why Trello is so successful and well-loved. In short: you can expect Trello to become even more awesome and more fun than it is today. Thank you to all the Trello users out there who have helped make Trello what it is today. Your passion, feedback, and love for Trello inspire all of us to continue to make Trello more useful and delightful.

Things you probably didn’t know you could do with Chrome’s Developer Console

$$('.className') will give you all the elements that have the class className, and $$('.className')[0] and $$('.className')[1] will give you the first and the second element respectively. You can also select any particular element among them by specifying that element’s position in the array. Find events associated with an element in the DOM: while debugging, you may want to find the event listeners bound to an element in the DOM, and the developer console makes this easy. Monitor events: if you want to monitor the events bound to a particular element in the DOM as they are executed, you can do this in the console as well; monitorEvents($('selector')) will monitor all the events associated with the element matching your selector and log them in the console as soon as they’re fired. inspect($('selector')) will inspect the element that matches the selector and take you to the Elements tab in the Chrome Developer Tools. And if you want to list all the properties of an element, you can do that directly from the Console too.
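As a quick sketch of those tips, here is how they look typed into the Chrome DevTools console. Note that $$, getEventListeners, monitorEvents, inspect and dir are console-only helpers rather than standard page JavaScript, and '.post-title' is just a made-up example selector.

```javascript
// Type these directly into the Chrome DevTools console.

var items  = $$('.post-title');      // all elements with the class, as an array
var first  = $$('.post-title')[0];   // pick a particular element by its position
var second = $$('.post-title')[1];

getEventListeners(first);            // event listeners bound to the element
monitorEvents(first, 'click');       // log its click events as they fire
inspect(first);                      // jump to the element in the Elements panel
dir(first);                          // list all of the element's properties
```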

Debugging mechanism in Intel CPUs allows seizing control via USB port

Researchers from Positive Technologies have revealed that some new Intel CPUs contain a debugging interface, accessible via USB 3.0 ports, that can be used to obtain full control over a system and perform attacks that are undetectable by current security tools. The JTAG debugging interface, now accessible via USB, has the potential to enable dangerous and virtually undetectable attacks. On older Intel CPUs, accessing JTAG required connecting a special device to a debugging port on the motherboard. Starting with the Skylake processor family in 2015, Intel introduced the Direct Connect Interface which provides access to the JTAG debugging interface via common USB 3.0 ports. Goryachy and Ermolov speculated that this mechanism in Intel CPUs could lead to a whole new class of Bad USB-like attacks, but at a deeper and even more dangerous level than their predecessor. SC asked Goryachy if he would compare this vulnerability to Stuxnet, to which he said: “This mechanism can be used on a hacked system regardless of the OS installed. Stuxnet was infecting only Windows machines, meanwhile the DCI can be used on any system with Intel U-series processor. This series is used on laptops and NUC. As of today, no publicly available security system will detect it.” Goryachy told SC, “We have reported this case to Intel. As of today, this mechanism can be exploited only on Intel U-series processors.”

Simple and Terrifying Encryption Story

Problem: I wanted to build an app where users can encrypt and decrypt messages. AES seemed to be a reasonable choice for such symmetric encryption, so my first step was to find a proper AES library. Solution: I was programming in Ruby, so I did what every Rubyist would do - I googled “Ruby gem aes”. I wrote a test for decoding messages with wrong keys; to be more specific, I replaced the first char of the key with some other chars. As a result, it turned out that messages can be decrypted with almost any key. The problem is that this gem is the first Google search result for the “aes gem” or “Ruby aes gem” query, and we don’t question top Google results very often.
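The sanity check the author describes is easy to reproduce with any trustworthy AES implementation. As a rough sketch, here it is with Node’s built-in crypto module and AES-256-GCM (my stand-in choice, not the Ruby gem from the story): changing the first byte of the key must make decryption fail.

```javascript
const crypto = require('crypto');

const key = crypto.randomBytes(32);   // AES-256 key
const iv  = crypto.randomBytes(12);   // GCM nonce

// Encrypt a message with the real key.
const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
const ciphertext = Buffer.concat([cipher.update('secret message', 'utf8'), cipher.final()]);
const tag = cipher.getAuthTag();

// Tamper with the first byte of the key, as in the article's test.
const wrongKey = Buffer.from(key);
wrongKey[0] ^= 0xff;

const decipher = crypto.createDecipheriv('aes-256-gcm', wrongKey, iv);
decipher.setAuthTag(tag);
try {
  Buffer.concat([decipher.update(ciphertext), decipher.final()]);
  console.log('BUG: decrypted with the wrong key');
} catch (e) {
  console.log('OK: decryption with a wrong key fails, as it should');
}
```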

Learn OpenGL, extensive tutorial resource for learning Modern OpenGL


Does anyone else have trouble finishing their side-programming projects?

If you’re always starting interesting projects and not finishing, then no matter how hard you work, you’re just busy, not productive. In a lot of ways these unfinished projects are not a waste - I definitely learn a lot from them and gain new skills. The same applies to the projects we work on: even if you dedicate many hours to a task and it’s 80% of the way there, if you never finish then no one will care. Now I have written out a list of every one of my projects and broken them down into steps I can attack piece by piece, and I focus on making sure that every day I cross off at least one item, so I’m getting closer to moving a project from my ‘to-do’ list into my ‘finished’ list. What is the minimal state of completion this project needs to reach for me to consider it a success and worth my time? If I cannot realistically commit the amount of time required to bring the project to that state, am I better off putting my energy into finishing projects that I am currently working on? Taking on a project that you do not have the time to finish is going to burn more bridges than telling people you don’t have the time to contribute. I have a list of projects that I am whittling down every day, and I intend to follow up this post at the end of the year with a list of my projects that are finally complete.

StackOverflow | A Post-Mortem on the Recent Developer Story Information Leak

The information wasn’t actually printed to browsers, but was present in the page’s HTML source markup. Discovery of the information was possible only through very specific searches containing the user’s email address or phone number. Our second priority was to get in touch with major search engines in order to get the accidentally disclosed information out of their indexes. Personally identifiable information is something that every developer needs to handle with care. It’s extremely important, if not critical, to know when you’re working with something that transmits personally identifiable information in any way. Among the follow-up measures: identification of PII in the code base and database, so developers immediately know if the code they’re working with stores or transmits PII and precisely the kind of information that needs to be considered. We take our responsibility as custodians of your information and trust very seriously; now that we’ve taken every possible measure to mitigate any potential inconvenience to those affected, we feel that we owe it to you to be as transparent as possible about what happened.

Chris Lattner Joins Tesla

We would like to welcome Chris Lattner, who will join Tesla as our Vice President of Autopilot Software. Chris’s reputation for engineering excellence is well known. He comes to Tesla after 11 years at Apple, where he was primarily responsible for creating Swift, the programming language for building apps on Apple platforms and one of the fastest growing languages for doing so on Linux. Prior to Apple, Chris was lead author of the LLVM Compiler Infrastructure, an open source umbrella project that is widely used in commercial products and academic research today. As Chris joins Tesla, we would like to give a special thanks to Jinnah Hosein, SpaceX’s Vice President of Software, who has been serving a dual role as the interim Vice President of Tesla Autopilot Software and will now be heading back to SpaceX full-time. We would like to thank Jinnah for the effort needed to achieve excellence in both roles, and to thank David Nister, our Vice President of Autopilot Vision, and the team for their exceptional work in advancing Autopilot. We are very excited that Chris is joining Tesla to lead our Autopilot engineering team and accelerate the future of autonomous driving.

Disassembling Jak & Daxter (which has one of the best game engines ever created)

The only part of the game not written in GOAL is the loader/linker, which is written in C++. This is the equivalent of their DLL loader; it’s just a simple stub program to load the rest. So where does the loader get its data from? First we need to examine the file formats used here. It’s not really fair to call this “hotloading”; it’s really just “loading”, as updating the game with new code/data is no different than loading it in the first place. These files are loaded in-place by the loader, which makes loading very efficient. What we’re doing is loading the address of a function and storing it at an offset into the global symbol table. So that’s pretty much all there is to loading compiled GO files. You can disassemble the ELF loader to figure out the exact file format, and with the ELF symbols for the loader available, it’s not that hard to replicate the function of the loader ourselves.

Deeply typed programming languages (response to The Dark Path by Uncle Bob)

Some months ago Uncle Bob wrote Type Wars, a post arguing that static type systems are not really needed if you do TDD. Now he’s back: in his latest post, The Dark Path, he brings new weapons for dynamic language enthusiasts to tell everyone else that all these static type checks are useless. Uncle Bob is right that there are risks in using many programming language features. That is the way we have to evaluate whether a programming language feature is appropriate or not. Obviously, if I’m just in the playground with a new language, as Uncle Bob was with Swift and Kotlin, I’m quite far from experiencing any benefit from the depth of the type system. In my experience, many programmers coming from dynamic languages see no benefit in static typing for exactly that reason: in the hundred lines they wrote following the language’s getting-started guide, they saw no benefit from the type system. In summary, I’m really proud of walking on the dark path of deeply typed programming languages.

Chris Lattner leaves Apple

Update on the Swift Project Lead. Chris Lattner (clattner at apple.com), Tue Jan 10 11:07:09 CST 2017. Since Apple launched Swift at WWDC 2014, the Swift team has worked closely with our developer community. When we made Swift open source and launched Swift.org, we put a lot of effort into defining a strong community structure. This structure has enabled Apple and the amazingly vibrant Swift community to work together to evolve Swift into a powerful, mature language powering software used by hundreds of millions of people. I’m happy to announce that Ted Kremenek will be taking over for me as “Project Lead” for the Swift project, managing the administrative and leadership responsibility for Swift.org. I plan to remain an active member of the Swift Core Team, as well as a contributor to the swift-evolution mailing list. Working with many phenomenal teams at Apple to launch Swift has been a unique life experience. Swift is in great shape today, and Swift 4 will be a really strong release with Ted as the Project Lead. Note that this isn’t a change to the structure - just to who sits in which role - so we don’t expect it to impact day-to-day operations in the Swift Core Team in any significant way.

Fast Haskell: Competing with C at parsing XML

In this post we’re going to look at parsing XML in Haskell, how it compares with an efficient C parser, and steps you can take in Haskell to build a fast library from the ground up. In the very limited benchmarks I’ve done it is typically just over 2x faster at parsing than Pugixml, where Pugixml is the gold standard for fast XML DOM parsers. C is really fast, right? Like hundreds of times faster than Haskell! It’s worth the risk. Using the Criterion benchmarking package, we can compare Hexml against the pretty old Haskell xml package. The first memory benchmark (case, bytes allocated, GCs, check): 4kb parse - 26,096 bytes, 0 GCs, OK; 42kb parse - 65,696 bytes, 0 GCs, OK; 52kb parse - 102,128 bytes, 0 GCs, OK; with speeds of 6.225 μs for 4KB/hexml and 10.34 μs for 4KB/xeno. The first thing that should jump out at you is the allocations. After reducing them: 4kb parse - 1,160 bytes, 0 GCs, OK; 42kb parse - 1,160 bytes, 0 GCs, OK; 52kb parse - 1,472 bytes, 0 GCs, OK; with speeds of 6.190 μs for 4KB/hexml and 4.215 μs for 4KB/xeno. Down to 4.215 μs - that’s not as fast as our pre-name-parsing 2.691 μs, but we had to pay something for the extra operations per tag. The final 1MB benchmark: hexml-dom 1.225 ms (R² 1.000, mean 1.239 ms, std dev 25.23 μs); xeno-sax 1.206 ms (R² 1.000, mean 1.213 ms, std dev 14.58 μs); xeno-dom 2.768 ms (R² 1.000, mean 2.801 ms, std dev 41.10 μs). Tada! We matched Hexml, in pure Haskell, using safe accessor functions.

A beginner’s Guide to the many different ways to JOIN tables in SQL

SELECT * FROM generate_series('2017-01-01'::TIMESTAMP, '2017-01-01'::TIMESTAMP + INTERVAL '1 month -1 day', INTERVAL '1 day') AS days(day). You can always JOIN … ON true to turn a syntactic INNER JOIN into a semantic CROSS JOIN (JOIN departments AS d ON true), and then turn the CROSS JOIN back into an INNER JOIN by putting the JOIN predicate in the WHERE clause: WHERE day >= d.created_at. EQUI JOIN: sometimes, e.g. in literature, you will hear the term EQUI JOIN, where “EQUI” isn’t really meant as a SQL keyword but just as a specific way of writing a special kind of INNER JOIN. In fact, it is weird that the “EQUI” JOIN is the special case, because it’s what we do most in SQL, also in OLTP applications where we simply JOIN by primary key / foreign key relationship. Convenient syntax: SELECT * FROM a LEFT JOIN b ON …. Cumbersome, equivalent syntax: SELECT a.*, b.* FROM a JOIN b ON … (the LEFT JOIN part) UNION ALL SELECT a.*, NULL, NULL, …, NULL FROM a WHERE NOT EXISTS (…) (the RIGHT JOIN part) UNION ALL SELECT NULL, NULL, …, NULL, b.* FROM b WHERE NOT EXISTS (…). Alternative syntaxes: “EQUI” OUTER JOIN. The above examples again used the “Cartesian product with filter” kind of JOIN; much more common is the “EQUI” OUTER JOIN approach, where we join on a primary key / foreign key relationship: SELECT * FROM actor FULL JOIN film_actor USING (…) FULL JOIN film USING (…). And of course this also works with NATURAL LEFT JOIN, NATURAL RIGHT JOIN and NATURAL FULL JOIN, but again, these aren’t useful at all, as we’d be joining USING every column that happens to share a name, which makes no sense here. SEMI JOIN: in relational algebra there is a notion of a semi join operation, which unfortunately doesn’t have a syntax representation in SQL; if it did, the syntax would probably be LEFT SEMI JOIN and RIGHT SEMI JOIN, just like the Cloudera Impala syntax extension offers. LATERAL: SELECT a.first_name, a.last_name, f.* FROM actor AS a LEFT OUTER JOIN LATERAL (SELECT … AS revenue FROM film AS f JOIN film_actor AS fa USING (…) JOIN inventory AS i USING (…) JOIN rental AS r USING (…) JOIN payment AS p USING (…) WHERE fa.…) …. This was A Probably Incomplete, Comprehensive Guide to the Many Different Ways to JOIN Tables in SQL. I hope you’ve found 1-2 new tricks in this article.

Let’s Stop Ascribing Meaning to Code Points

One very common misconception I’ve seen is that code points have cross-language intrinsic meaning. Folks start implying that code points mean something, and that O(1) indexing or slicing at code point boundaries is a useful operation. UTF-8 encodes 7-bit code points as a single byte, 11-bit code points as two bytes, 16-bit code points as three bytes, and 21-bit code points as four bytes. UTF-32 encodes all code points as 4-byte code units. The flag emoji “🇺🇸” is also made of two code points: 🇺 + 🇸. One false assumption that’s often made is that code points are a single column wide. While the treatment of code points in editing contexts is not consistent, it seems like applications consistently do not consider code points as “editing units”. Of course, APIs that work with code points are exposed too; you can still iterate over the code points when you need to.
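To make the distinction concrete, here is a tiny JavaScript sketch (any language with Unicode strings would do): the flag renders as one glyph, but it is two code points and four UTF-16 code units.

```javascript
const flag = "🇺🇸"; // REGIONAL INDICATOR U (U+1F1FA) + REGIONAL INDICATOR S (U+1F1F8)

console.log(flag.length);       // 4 -> UTF-16 code units, not "characters"
console.log([...flag].length);  // 2 -> code points, still not one "character"

// Indexing or slicing at code point boundaries can split the flag apart:
console.log([...flag][0]);      // "🇺" -> no longer the US flag
```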

Celebrating Telegram 1.0 by using it as a network tunnel


Nim 0.16 released

The new Nimble release that is included with Nim 0.16.0 includes a variety of new features and bug fixes. import compiler/ast, compiler/parser, compiler/lexer can now be written as import compiler / [ast, parser, lexer]; the two are equivalent, but the latter syntax is less redundant. Library additions: added a new parameter to the error proc of the macros module to provide better error messages, and added a new deques module intended to replace queues. Language additions: the emit pragma now takes a list of Nim expressions instead of a single string literal. Bug fixes include: Fixed “…sh error: unknown processor: aarch64”; Fixed “RFC: asyncdispatch.poll behaviour”; Fixed “Can’t access enum members through alias”; Fixed “Type, declared in generic proc body, leads to incorrect codegen”; Fixed “Compiler SIGSEGV when mixing method and proc”; Fixed “Compile-time SIGSEGV when declaring .importcpp method with return value”; Fixed “Variable declaration incorrectly parsed”; Fixed “Invalid C code when naming a object member ‘Linux’”; Fixed “[Windows] MinGW within Nim install is missing libraries”; Fixed “Async: annoying warning for future.finished”; Fixed “New import syntax doesn’t work?”; Fixed “Fixes #1994”; Fixed “Can’t tell return value of programs with staticExec”; Fixed “startProcess() on Windows with poInteractive: Second call fails”.

The J source code is pretty terrible

/* Verbs: Primes and Factoring */ #include "j.h" #define MM 25000L /* interval size to look for primes */ #define PMAX 105097564L /* upper limit of p: ; … = p: PMAX */ #define PT 500000L /* interval size in ptt */ static A p4792=0; /* p: i.4792 */ static I ptt[]={ … }; /* p: PT*1+i. 210 */ static I ptn=sizeof(ptt)/SZI; static I jtsup(J jt,I n,I*wv) /* <. … */

Optimizing Chrome by adding const (and, on VC++, by deleting it)

Most executable formats have at least two data segments - one for read/write globals and one for read-only globals. If you have constant data, such as kBrotliDictionary, then it is best to have it in the read-only data segment, which is segment ‘2’ in Chrome’s Windows binaries. Putting data in the read-only data segment has a few advantages. When my ShowGlobals tool showed that blink::serializedCharacterData was in the read/write data segment, and investigation showed that it was never modified, I landed a change to add a const modifier, and it dutifully moved to the read-only data segment. With VC++, if you have a class/struct with a const member variable, then any global objects of that type end up in the read/write data segment. For most of my changes the effect was just to move some data from the read/write data segment to the read-only data segment, as expected, but two of the changes did much more. Most importantly, the various globals involved in these two changes go from being mostly or completely per-process private data to being shared data, saving an estimated 200 KB of data per process.

Chrome Command Line API Reference - A more complete list of things you probably didn’t know you could do with Chrome’s Developer Console

The Command Line API contains a collection of convenience functions for performing common tasks: selecting and inspecting DOM elements, displaying data in readable format, stopping and starting the profiler, and monitoring DOM events. $0 - $4: the $0, $1, $2, $3 and $4 commands work as a historical reference to the last five DOM elements inspected within the Elements panel or the last five JavaScript heap objects selected in the Profiles panel. $(selector) returns the reference to the first DOM element with the specified CSS selector. dirxml(object) prints an XML representation of the specified object, as seen in the Elements tab. getEventListeners(object) returns the event listeners registered on the specified object. table(data[, columns]) logs object data with table formatting by passing in a data object, with optional column headings. unmonitorEvents(object[, events]) stops monitoring events for the specified object and events.
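A few of these in use, as a sketch you could paste into the DevTools console (the 'nav' selector and the table data are made-up examples; these helpers exist only in the console, not in page scripts):

```javascript
// $0..$4 are the last five elements selected in the Elements panel ($0 is the newest).
dirxml($0);           // print the currently selected element as a DOM/XML tree

// $(selector) returns the first matching element, much like document.querySelector.
var nav = $('nav');
dirxml(nav);

// table() renders array or object data as a table, optionally limited to given columns.
table(
  [{ name: 'home', hits: 120 }, { name: 'about', hits: 42 }],  // sample data
  ['name', 'hits']
);
```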

Big list of interesting open source projects to contribute to, in such languages as C, C++, Golang, JavaScript, Python, etc.

A.W.E.S.O.M. O: The really big list of really interesting open source projects. If you are interested in Open Source and are considering joining the community of Open Source developers, it is possible that in this list you will find a project that suits you. Wanna support us? Just share this list with your friends on Twitter, Facebook, Medium or somewhere else. To the extent possible under law, the person who associated CC0 with awesomo has waived all copyright and related or neighboring rights to awesomo. You should have received a copy of the CC0 legalcode along with this work. If not, see https://creativecommons.org/publicdomain/zero/1.

Uncle Bob argues for languages that let you shoot yourself in the foot

These languages are both a far cry from a truly functional programming language; but every step in that direction is a good step. My problem is that both languages have doubled down on strong static typing. In the case of Swift, the parent language is the bizarre typeless hybrid of C and Smalltalk called Objective-C; so perhaps the emphasis on typing is understandable. The question is: Whose job is it to manage that risk? Is it the language’s job? Or is it the programmer’s job? The rules of the language insist that when you use a nullable variable, you must first check that variable for null. To become an expert in these languages, you must become a language lawyer. If your answer is that our languages don’t prevent them, then I strongly suggest that you quit your job and never think about being a programmer again; because defects are never the fault of our languages.

Senior Engineers Reduce Risk

New users introduce technical risk but new hires introduce risk into the company’s processes and culture. The senior engineers may reduce risk deliberately, or simply have the right skill set at the right time. Senior engineers are storytellers: most risks, especially where managed effectively, never come to pass, and if a senior engineer identifies a significant risk, they have to be able to concisely explain and prioritize it for a non-expert audience. Senior engineers choose companies with the right risks: every company has different risks, and so every company expects something different from their senior engineers. Senior engineers know titles don’t mean much: without context, knowing someone was a “senior engineer” tells you almost nothing. Titles can matter a lot in some environments, but in those cases titles are just another tool for reducing risk; if being a senior, or staff, or principal engineer is the only way to make your voice heard, then pursuing those titles is worthwhile.

Announcing Tokio 0.1

Today we are publishing the preliminary version of the Tokio stack, 0.1! Tokio is a platform for writing fast networking code in Rust. You can use the Tokio stack to handle a wide range of protocols, including streaming and multiplexed protocols, as well as more specialized servers like proxies. Tokio is primarily intended as a foundation for other libraries, in particular for high performance protocol implementations. Over time, we expect Tokio to grow a rich middleware ecosystem and ultimately to support various web and application frameworks. Hyper, for example, has been adding Tokio integration, and there’s a growing list of other protocol implementations as well. In general, we are eager to support the growing Tokio ecosystem.

dgsh — directed graph shell – I feel like I have been waiting for something like this my whole life!

Process the git history and create two PNG diagrams depicting committer activity over time. The most active committers appear at the center vertical of the diagram. Demonstrates image processing, mixing of synchronous and asynchronous processing in a scatter block, and the use of a dgsh-compliant join command.

How removing caching improved mobile performance by 25%

A colleague and I were looking at how our flagship product, Klarna Checkout, loads in the browser and contemplating ways to improve the performance. With the application cache, you tell the browser what your app needs to work offline via a manifest; the browser downloads everything in the manifest, and the next time a user hits your page, it will first load the cached content before checking over the network whether the manifest file has changed. It wasn’t actually necessary, since for our use case standard HTTP caching techniques already got us what we wanted - cached assets being loaded without going to the network. I still had questions about exactly how and when these files were downloaded and how the downloads affected potentially concurrent requests for the same assets from the HTML itself. On my machine, the application cache removal improved response time by half, while the fullscreen deferral improved it by another 15%. w00t! When we put the changes into production, they had a dramatic effect on Chrome Mobile especially, cutting load times by 25%. The other browsers also saw an improvement, but it was more modest. In my colleague’s case, he sometimes had the awful performance I saw, so I’m going to guess that it had something to do with my machine being newer and thus being able to get farther along in the load process before the ‘ready’ event came in.
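For context, this is roughly the (now-deprecated) application cache flow described above, sketched in page JavaScript; the app.appcache manifest name is a made-up example rather than Klarna’s actual setup.

```javascript
// The page opts in with <html manifest="app.appcache">; the browser then serves the
// cached copy first and re-checks the manifest in the background.
var cache = window.applicationCache;

cache.addEventListener('updateready', function () {
  // A changed manifest was found and its files have been re-downloaded.
  if (cache.status === cache.UPDATEREADY) {
    cache.swapCache();   // switch to the freshly downloaded cache...
    location.reload();   // ...and reload so the user actually sees the new version
  }
});

cache.addEventListener('error', function () {
  // Fired e.g. when the manifest cannot be fetched; the page keeps running from cache.
  console.warn('Application cache update failed');
});
```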
