Wednesday, August 13, 2014

Parallel JavaScript

ParallelJS started its life as River Trail, a JavaScript language extension for Firefox developed by Intel. River Trail provides a new constructor, ParallelArray, whose iteration methods execute their callbacks concurrently.
ParallelJS brings the River Trail idea to normal arrays and has recently been included in Firefox Nightly. It adds three methods to arrays (binary data will eventually get these methods, too):

Array.prototype.mapPar()
Array.prototype.filterPar()
Array.prototype.reducePar()
These methods work like the traditional array methods map(), filter() and reduce(), but the order in which the elements are processed is undefined (with the non-par versions, you know exactly in which order they are processed). That enables concurrent execution of multiple instances of the callback. However, not all callbacks can be run concurrently. A callback must fulfill certain requirements:
It must not mutate shared data. It can freely change data that it has created, but can only read data from the outside world.
Many host (or “native”) objects can’t be used. That includes the DOM, which may never be supported, and regular expressions, for which short- to mid-term support is likely. All vanilla JavaScript objects are fine, though.
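The mapPar() method is not available in today's standard engines, so the sketch below models its contract sequentially: the callback must be pure, and callers must not depend on iteration order. The name mapParSketch is hypothetical.

```javascript
// Sequential sketch of mapPar semantics. Because the callback may not
// mutate shared data, the engine is free to process elements in any
// order; we process them in reverse to make that explicit.
function mapParSketch(array, callback) {
  const result = new Array(array.length);
  for (let i = array.length - 1; i >= 0; i--) {
    result[i] = callback(array[i], i);
  }
  return result;
}

const squares = mapParSketch([1, 2, 3, 4], x => x * x);
console.log(squares); // [1, 4, 9, 16]
```

A callback that, say, appended to an outer array would violate the no-shared-mutation rule and force PJS back to sequential execution.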
PJS code is executed like this: it is first run sequentially and observed, and the requirements for parallelism are checked. If they are fulfilled, parallel code is generated and run in parallel. If any requirement is no longer fulfilled, PJS reverts to sequential execution.
The rules for when PJS code can be run concurrently will probably never be standardized, because a PJS standard needs to accommodate a variety of implementation approaches (see below). There will, however, be guidelines and recommendations for what works for most implementations.

Given how difficult it is to predict whether your code can be run concurrently or not, an important part of PJS will be diagnostic tools. They will let you find out when and why code is not parallelizable.

ParallelJS is currently tentatively scheduled for ECMAScript 8. If you are worried that that’s too far into the future: ECMAScript 7 and ECMAScript 8 will be smaller and developed more quickly than ECMAScript 6. 

http://www.2ality.com/2013/12/paralleljs.html

Also check out WebCL for the work there on parallel computing in HTML5 browsers.
https://www.khronos.org/webcl/

Tuesday, July 15, 2014

Reactive Manifesto

1. Event Driven via Asynchronous Communication
In an event-driven application, the components interact with each other through the production and consumption of events—discrete pieces of information describing facts. These events are sent and received in an asynchronous and non-blocking fashion. Event-driven systems tend to rely on push rather than pull or poll, i.e. they push data towards consumers when it is available instead of wasting resources by having the consumers continually ask for or wait on the data.

Event-driven systems enable loose coupling between components and subsystems. This level of indirection is, as we will see, one of the prerequisites for scalability and resilience. By removing complex and strong dependencies between components, event-driven applications can be extended with minimal impact on the existing application.
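The push-based, loosely coupled interaction described above can be sketched with a minimal event bus. All names here are hypothetical, and dispatch is synchronous for brevity; a production system would dispatch asynchronously (e.g. via a queue) to stay non-blocking.

```javascript
// Minimal push-based event bus: producers push events to registered
// consumers when data is available, instead of consumers polling.
class EventBus {
  constructor() { this.handlers = {}; }
  subscribe(topic, handler) {
    (this.handlers[topic] = this.handlers[topic] || []).push(handler);
  }
  publish(topic, event) {
    (this.handlers[topic] || []).forEach(h => h(event));
  }
}

const bus = new EventBus();
const received = [];
bus.subscribe("order.created", e => received.push(e.id));
bus.publish("order.created", { id: 42 }); // pushed, not polled
```

Note that the producer knows only the topic name, not the consumers, which is exactly the indirection that lets components be added or replaced with minimal impact.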

2. Scalable via Location Transparency
The topology of the application becomes a deployment decision which is expressed through configuration and/or adaptive runtime algorithms responding to application usage.

3. Resilient via Fine Grained Error Handling
Application downtime is one of the most damaging issues that can occur to a business. The usual implication is that operations simply stop, leaving a hole in the revenue stream. In the long term it can also lead to unhappy customers and a poor reputation, which will hurt the business more severely. It is surprising that application resilience is a requirement that is largely ignored or retrofitted using ad-hoc techniques. This often means that it is addressed at the wrong level of granularity using tools that are too coarse-grained. A common technique uses application server clustering to recover from runtime and server failures. Unfortunately, server failover is extremely costly and also dangerous—potentially leading to cascading failures taking down a whole cluster. The reason is that this is the wrong level of granularity for failure management which should instead be addressed using fine-grained resilience on the component level.

In a reactive application, resilience is not an afterthought but part of the design from the beginning. Making failure a first class construct in the programming model provides the means to react to and manage it, which leads to applications that are highly tolerant to failure by being able to heal and repair themselves at runtime. Traditional fault handling cannot achieve this because it is defensive in the small and too aggressive in the large—you either handle exceptions right where and when they happen or you initiate a failover of the whole application instance.

Key Building Blocks

In order to manage failure we need a way to isolate it so it doesn’t spread to other healthy components, and to observe it so it can be managed from a safe point outside of the failed context. One pattern that comes to mind is the bulkhead pattern, illustrated by the picture, in which a system is built up from safe compartments so that if one of them fails the other ones are not affected. This prevents the classic problem of cascading failures and allows the management of problems in isolation.

The event-driven model, which enables scalability, also has the necessary primitives to realize this model of failure management. The loose coupling in an event-driven model provides fully isolated components in which failures can be captured together with their context, encapsulated as messages, and sent off to other components that can inspect the error and decide how to respond.
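A minimal sketch of that idea: a failure inside an isolated component is captured together with its context, encapsulated as a plain message, and handed to a supervisor outside the failed context. The names (runIsolated, supervisor) are hypothetical illustrations, not an API from any particular framework.

```javascript
// Run a task inside an isolation boundary. A failure is turned into a
// message for the supervisor instead of propagating to callers.
function runIsolated(name, task, supervisor) {
  try {
    return task();
  } catch (err) {
    supervisor({ component: name, error: err.message });
    return null; // the failure stays inside the compartment
  }
}

const failures = [];
runIsolated("pricing", () => { throw new Error("rate feed down"); },
            msg => failures.push(msg));
// The "pricing" compartment failed, but the rest of the system keeps running.
```

The supervisor can then decide how to respond, for example restarting the component or routing around it, which is the fine-grained, component-level resilience the text argues for.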

4. Responsive via observable models, event streams, and stateful clients
Observable models enable other systems to receive events when state changes. This can provide a real-time connection between users and systems. For example, when multiple users work concurrently on the same model, changes can be reactively synchronized bi-directionally between them, thus appearing as if the model is shared without the constraints of locking.

Event streams form the basic abstraction on which this connection is built. Keeping them reactive means avoiding blocking and instead allowing asynchronous and non-blocking transformations and communication.
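An observable model of this kind can be sketched in a few lines. All names are hypothetical, and change notifications are delivered synchronously here so the sketch stays self-contained; a reactive implementation would deliver them asynchronously so the producer is never blocked.

```javascript
// A tiny observable model: subscribers receive an event on every
// state change, giving downstream systems a live view of the state.
function observable(initial) {
  let value = initial;
  const subscribers = [];
  return {
    get: () => value,
    set(next) {
      value = next;
      subscribers.forEach(fn => fn(next)); // push the change event
    },
    onChange(fn) { subscribers.push(fn); },
  };
}

const model = observable(0);
const seen = [];
model.onChange(v => seen.push(v));
model.set(1);
model.set(2);
// seen now reflects the stream of changes: [1, 2]
```

Wiring two such models together, each subscribing to the other's changes, gives the bi-directional synchronization described above without any shared lock.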

Reactive applications embrace the order of algorithms by employing design patterns and tests to ensure a response event is returned in O(1) or at least O(log n) time regardless of load. The scaling factor can include but is not limited to customers, sessions, products and deals.

They employ a number of strategies to keep response latency consistent regardless of load profile:

Under bursty traffic conditions reactive applications amortize the cost of expensive operations—such as IO and concurrent data exchange—by applying batching combined with an understanding and consideration of the underlying resources to keep latency consistent.
Queues are bounded with appropriate back pressure applied; queue lengths for given response constraints are determined by employing Little’s Law.
Systems are monitored with appropriate capacity planning in place.
Failures are isolated with alternate processing strategies readily available for when circuit breakers are triggered.
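The Little's Law step above is simple arithmetic: L = λW, where L is the average number of items in the system, λ the arrival rate, and W the average time an item spends in the system. Given a response-time budget, this bounds how long a queue may grow before back pressure must kick in. The function name is a hypothetical illustration.

```javascript
// Little's Law: L = lambda * W. For a target response time W and an
// arrival rate lambda, L is the maximum number of items that may be
// in flight (queued + in service) while still meeting the target.
function maxQueueLength(arrivalRatePerSec, targetResponseSec) {
  return arrivalRatePerSec * targetResponseSec;
}

// 200 requests/s with a 0.5 s response-time budget:
console.log(maxQueueLength(200, 0.5)); // 100 items in flight
```

If the queue exceeds that bound, the response-time constraint is already violated on average, which is the signal to shed load or apply back pressure upstream.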

Friday, May 2, 2014

Improved Type Inference in C++11: auto, decltype, and the new function declaration syntax

int x = 3;
decltype(x) y = x; // same thing as auto y = x;
You can use decltype for pretty much any expression, including to express the type of a return value from a method. Hmm, that sounds like a familiar problem, doesn't it? What if we could write:
decltype( builder.makeObject() )
This would give us the type that is returned from makeObject, allowing us to specify the return value from makeAndProcessObject. We can combine this with the new return value syntax to produce this method:
template <typename Builder>
auto
makeAndProcessObject (const Builder& builder) -> decltype( builder.makeObject() )
{
    auto val = builder.makeObject();
    // do stuff with val
    return val;
}

As of this writing, GCC 4.4 and MSVC 10 both support everything I've talked about in this article, and I've used most of these techniques in production code; these aren't theoretical benefits, they're real. So if you're compiling your code with -std=c++0x in GCC or using MSVC 10, you can start using these techniques today. If you're using another compiler, check out this page for details of C++11 compiler support. Since the standard has been ratified and should be published within weeks, now's the time to start.

http://www.cprogramming.com/c++11/c++11-auto-decltype-return-value-after-function.html