react

We can build some pretty elegant React component APIs by using functions as children. Take, for example, a Dropdown component. If we want it to be flexible by leaving the DOM structure up to the user, we need some way to designate the toggler elements of the Dropdown. One way is with a data-toggle attribute:
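
A minimal sketch of what such usage might look like; the button and menu markup here are illustrative:

```jsx
<Dropdown>
  {/* data-toggle marks which element should toggle the dropdown */}
  <button data-toggle>Toggle dropdown</button>
  <ul className="menu">
    <li>Item 1</li>
    <li>Item 2</li>
  </ul>
</Dropdown>
```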

However, this requires manually setting up event listeners on real DOM nodes. For example, the componentDidMount method on this component might look something like this:
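
A rough sketch, assuming the togglers are marked with the data-toggle attribute described above, that ReactDOM is imported from react-dom, and that this.toggle is bound in the constructor:

```jsx
componentDidMount() {
  // Find every element marked with data-toggle inside this component's DOM
  // and attach a real DOM click listener to it.
  const node = ReactDOM.findDOMNode(this);
  this.togglers = Array.from(node.querySelectorAll('[data-toggle]'));
  this.togglers.forEach(el => el.addEventListener('click', this.toggle));
}

componentWillUnmount() {
  // Clean up the manually attached listeners.
  this.togglers.forEach(el => el.removeEventListener('click', this.toggle));
}
```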

A more elegant solution is to expose the component’s toggle method to its children by using a function as a child:
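
A sketch of the resulting API: the Dropdown calls its function child with its toggle method, and the user attaches it to whichever element they like (the markup is again illustrative):

```jsx
<Dropdown>
  {toggle => (
    <div>
      {/* toggle is the Dropdown's own method, wired up through React's event system */}
      <button onClick={toggle}>Toggle dropdown</button>
      <ul className="menu">
        <li>Item 1</li>
        <li>Item 2</li>
      </ul>
    </div>
  )}
</Dropdown>
```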

This way, we’re using React’s event system instead of raw DOM events, and we wouldn’t need to implement componentDidMount at all.

When toggle is called, the opened CSS class is toggled on the div element. In other words, the component initially generates DOM that looks like this:
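
Something like the following, assuming the illustrative markup from the sketch above:

```html
<div class="dropdown">
  <button>Toggle dropdown</button>
  <ul class="menu">
    <li>Item 1</li>
    <li>Item 2</li>
  </ul>
</div>
```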

and when the toggle function is called, the opened class is added to the element:
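
After toggling, with the same illustrative markup:

```html
<div class="dropdown opened">
  <button>Toggle dropdown</button>
  <ul class="menu">
    <li>Item 1</li>
    <li>Item 2</li>
  </ul>
</div>
```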

The implementation of this Dropdown component looks like this:
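
A minimal sketch of such an implementation: the state holds an opened flag, the function child is called with the instance's toggle method, and the returned element is cloned to add the CSS classes.

```jsx
import React from 'react';

class Dropdown extends React.Component {
  constructor(props) {
    super(props);
    this.state = { opened: false };
    this.toggle = this.toggle.bind(this);
  }

  toggle() {
    // Flip the opened flag; render() will re-clone the child with new classes.
    this.setState(prev => ({ opened: !prev.opened }));
  }

  render() {
    // The function child is called with the toggle method and returns an element.
    const child = this.props.children(this.toggle);
    const className = this.state.opened ? 'dropdown opened' : 'dropdown';
    // Clone that element to add the dropdown/opened CSS classes.
    return React.cloneElement(child, { className });
  }
}

export default Dropdown;
```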

this.props.children is the function child of the Dropdown component, and it is called with the instance’s toggle method. The call returns a React element (the div), which we clone to add the dropdown and opened CSS classes.

A discussion of this pattern and other real-world use cases can be found here.

This article was written by Kenneth Chung.

Djangoapp2

When we start prototyping our first web application with Django, we tend to create one Django app and put all the models into it. The reason is simple: there are not that many models, and the business logic is straightforward. But as the business grows, more models and more business logic get added, and one day we may find our application in an awkward position: it becomes harder and harder to locate a bug, and it takes longer and longer to add new features, even simple ones. In this post we'll talk about how to use separate Django apps to reorganize models and business logic so that the code base scales with the growth of the business. We will also illustrate the change with a simple case study.

Prototyping stage – a simple case study

We start with a simple application called “Weblog”, which allows users to create and publish blog posts. We create a single app called weblog, with the following models.

In weblog/models.py:
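
A minimal sketch of what the initial model might look like; the field names are illustrative:

```python
from django.conf import settings
from django.db import models


class Blog(models.Model):
    author = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE,
                               related_name='blogs')
    title = models.CharField(max_length=200)
    content = models.TextField()
    is_published = models.BooleanField(default=False)
    created_at = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.title
```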

Now assume the rest of the application is completed based on the models above. Users can now log in, create, and publish their blog posts using our application.

Evolving approach I – keep adding new business logic into the same app

Say we have a new requirement: in order to attract more authors to create content with our application, we'll pay the authors based on view counts. The rate is $10 for every 1,000 views, and the payout is sent once a month.

Since the new requirement sounds pretty simple, the regular approach of putting the new models and logic into the existing app seems good enough. First, we add a new model to the “weblog” app:

In weblog/models.py:
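
A sketch of the added model for tracking per-month view counts; the field names are illustrative:

```python
class MonthlyViewCount(models.Model):
    blog = models.ForeignKey(Blog, on_delete=models.CASCADE,
                             related_name='monthly_view_counts')
    year = models.IntegerField()
    month = models.IntegerField()
    view_count = models.IntegerField(default=0)

    class Meta:
        unique_together = ('blog', 'year', 'month')
```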

For each new view of a blog post, we increment the count for the current month, or create a new MonthlyViewCount record if this is the first view in that month. The code looks like this:
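
A sketch of what the method might look like, added to the Blog model in weblog/models.py:

```python
from django.db.models import F
from django.utils import timezone


class Blog(models.Model):
    # ... existing fields ...

    def increase_view_count(self):
        """Increment this month's view count, creating the record on first view."""
        now = timezone.now()
        record, created = MonthlyViewCount.objects.get_or_create(
            blog=self, year=now.year, month=now.month,
            defaults={'view_count': 1},
        )
        if not created:
            # Use an F() expression so concurrent views don't race each other.
            MonthlyViewCount.objects.filter(pk=record.pk).update(
                view_count=F('view_count') + 1)
```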

At the end of each month, we run a cron task that aggregates the view counts for each author and sends them the corresponding payment. Here's the pseudocode:

In weblog/tasks.py:
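
A sketch of the task; send_payment() is a hypothetical helper standing in for the real payment integration:

```python
from django.contrib.auth import get_user_model
from django.db.models import Sum

from weblog.models import MonthlyViewCount

PRICE_PER_1000_VIEWS = 10  # dollars


def send_viewcount_payment_to_authors(year, month):
    User = get_user_model()
    for author in User.objects.all():
        total_views = (MonthlyViewCount.objects
                       .filter(blog__author=author, year=year, month=month)
                       .aggregate(total=Sum('view_count'))['total'] or 0)
        amount = total_views / 1000.0 * PRICE_PER_1000_VIEWS
        if amount > 0:
            send_payment(author, amount)  # hypothetical payment helper
```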

The approach above seems fine for handling a simple change request like this. But there will always be new requirements coming in.

Evolving approach II – organize the business logic into different Django apps

Now we have a new requirement. To encourage authors to create content in certain categories, the business team wants to change the award strategy to a category-based one, where each category has a different award price. Say the award price table looks like the following:

Category   Price (per 1,000 views)
Tech       $15
Sports     $10
Fashion    $5

This new requirement also looks simple enough: we could just create a new model in the existing app to store the per-category price and update the cron task to look up that price when aggregating the totals. The whole change would take less than 30 minutes, and everything would be good to go.

But there are two major problems with cramming more and more models and business logic into the main ‘weblog’ app.

  1. The main app becomes responsible for business logic from several different domains, and its modules grow bigger and harder to maintain.
  2. Agility is compromised, since it is harder to debug an issue and slower to add new features.

In Django, we can use separate apps to organize the business logic of different domains, and signals to handle the communication between apps. In our example, we'll move all the billing-related models and methods into a new app called ‘billing’.

First we move all the billing-related models into the new billing app.

In billing/models.py:
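
A sketch of the moved models. CategoryPrice is the model that would store the per-category award price; the names are illustrative:

```python
from django.db import models


class MonthlyViewCount(models.Model):
    # Reference the Blog model in the weblog app by its app label.
    blog = models.ForeignKey('weblog.Blog', on_delete=models.CASCADE,
                             related_name='monthly_view_counts')
    year = models.IntegerField()
    month = models.IntegerField()
    view_count = models.IntegerField(default=0)

    class Meta:
        unique_together = ('blog', 'year', 'month')


class CategoryPrice(models.Model):
    """Award price per 1,000 views for a blog category (e.g. Tech, Sports)."""
    category = models.CharField(max_length=50, unique=True)
    price_per_1000_views = models.DecimalField(max_digits=6, decimal_places=2)
```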

Now, for each new view of any blog article, the billing app needs to be informed so that it can record the view. To do so, we define a signal in the “weblog” app and create a signal handler in the “billing” app to process it.

We move Blog.increase_view_count() into billing/receivers.py as a signal handler:
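
A sketch of the handler, assuming a blog_viewed signal defined in weblog/signals.py (shown next):

```python
from django.db.models import F
from django.dispatch import receiver
from django.utils import timezone

from billing.models import MonthlyViewCount
from weblog.signals import blog_viewed


@receiver(blog_viewed)
def increase_view_count(sender, blog, **kwargs):
    """Record one view for the given blog in the current month."""
    now = timezone.now()
    record, created = MonthlyViewCount.objects.get_or_create(
        blog=blog, year=now.year, month=now.month,
        defaults={'view_count': 1},
    )
    if not created:
        MonthlyViewCount.objects.filter(pk=record.pk).update(
            view_count=F('view_count') + 1)

# Remember to import this module (e.g. in the billing app's AppConfig.ready())
# so the receiver actually gets registered.
```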

Then a new signal is created in weblog/signals.py:
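
A sketch; the signal name and its arguments are illustrative:

```python
import django.dispatch

# Fired whenever a blog post is viewed; the handler receives the Blog instance.
blog_viewed = django.dispatch.Signal(providing_args=['blog'])
```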

And we also need to add a signal-sending snippet to the relevant view method in weblog/views.py:
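
A sketch of a detail view that fires the signal; the view and template names are illustrative:

```python
from django.shortcuts import get_object_or_404, render

from weblog.models import Blog
from weblog.signals import blog_viewed


def blog_detail(request, blog_id):
    blog = get_object_or_404(Blog, pk=blog_id)
    # Notify interested apps (e.g. billing) that this blog was viewed.
    blog_viewed.send(sender=Blog, blog=blog)
    return render(request, 'weblog/blog_detail.html', {'blog': blog})
```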

Finally, we can move the billing-related cron task send_viewcount_payment_to_authors from weblog/tasks.py to billing/tasks.py and add new logic to handle the category-based pricing.
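
A sketch of what the moved task in billing/tasks.py might look like, assuming the Blog model gains a category field; send_payment() remains a hypothetical helper:

```python
from django.contrib.auth import get_user_model
from django.db.models import Sum

from billing.models import CategoryPrice, MonthlyViewCount


def send_viewcount_payment_to_authors(year, month):
    User = get_user_model()
    # Look up the per-category price once, e.g. {'Tech': 15.0, 'Sports': 10.0, ...}.
    prices = {p.category: float(p.price_per_1000_views)
              for p in CategoryPrice.objects.all()}
    for author in User.objects.all():
        amount = 0.0
        rows = (MonthlyViewCount.objects
                .filter(blog__author=author, year=year, month=month)
                .values('blog__category')   # assumes Blog has a category field
                .annotate(total=Sum('view_count')))
        for row in rows:
            price = prices.get(row['blog__category'], 0.0)
            amount += row['total'] / 1000.0 * price
        if amount > 0:
            send_payment(author, amount)  # hypothetical payment helper
```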

Compared with the regular approach of simply putting everything new into the main app, the approach above needs more code changes and refactoring, but it has several merits that make it worthwhile.

  1. The business logic from each domain is segregated from the other domains, which makes the code base easier to maintain.
  2. If an issue occurs at runtime, the cause can quickly be narrowed down to one app based on the symptom, which shortens the debugging time.
  3. When a new developer onboards, they can start by working on a single app, which flattens the learning curve.
  4. If we decide to deprecate the business logic of an entire domain (e.g. all the billing features are no longer needed), we can simply remove that app, and everything else should continue to run normally.

Conclusion

A lot of startups use Django to prototype their product or service, and Django can also handle the growth of the business quite well. An important part of that is to rethink and reorganize the business logic into separate apps from time to time, and to keep the responsibility of each app as simple as possible.

This article was written by Irwen Song.

ShipFlow Rescale

Date: Wednesday, March 2nd, 2016
Time: 8:00 AM PST / 11:00 AM EST / 5:00 PM CET
Duration: 60 minutes

Presenters:
Leif Broberg, Managing Director, FLOWTECH
Sarah Dietz, Business Development Manager, Rescale
Hiraku Nakamura, Application Engineer, Rescale
Magnus Östberg, Project Manager CFD, FLOWTECH

In this webinar, we will demonstrate how SHIPFLOW users can benefit from using Rescale’s on-demand HPC cloud platform.

CFD is used extensively in the hydrodynamic design of ships, e.g. to evaluate resistance, delivered power, and added resistance, or even for automatic hull shape optimization. However, limited local computing resources and deadline constraints can restrict the CFD scope of these projects. Whether working with a few large computations or many smaller computations in an optimization, access to additional resources in the cloud can be the solution.

Rescale and FLOWTECH will demonstrate different approaches to running SHIPFLOW with Rescale’s on-demand HPC solution.

1.  Introduction to Rescale and SHIPFLOW
2.  How to run SHIPFLOW directly on the Rescale platform
3.  Running an optimization or a parameter study locally while using Rescale remotely

We look forward to having you join us for this free webinar on March 2nd.

Register

This article was written by Rescale.

ryanblog3

It has been about a year and a half since we released a reusable Azure Cloud Service for provisioning a simple Windows MS-MPI cluster without having to install HPC Pack. Azure has undergone a lot of changes since that time and we thought it would be worth revisiting this topic to see what the current landscape looks like for running Windows MPI applications in the cloud.

First, Cloud Services and the so-called “IaaS v1 Virtual Machines” have been relegated to “Classic” status in the Azure portal. Microsoft now recommends that all new deployments use Azure Resource Manager (ARM). Azure Resource Manager allows clients to submit a declarative template, written in JSON, that defines all of the cloud resources, such as VMs, load balancers, and network interfaces, that need to be created as part of an application or cluster. Dependencies can be defined between resources, and the Resource Manager is smart enough to parallelize resource deployment where it can. This can make deploying a new cluster or application much faster than the old model. Azure Resource Manager is essentially the equivalent of CloudFormation on AWS. There are some additional niceties here, though, such as being able to specify loops in the template. Dealing with conditional resource deployment, however, is clunkier in ARM templates than in CloudFormation. Both services suffer from trying to support programming logic from within JSON. All in all, however, ARM deployments are much easier to manage than Classic ones.
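
As an illustration of the loop support, here is a minimal sketch of an ARM template fragment that creates one public IP address per node with a copy loop; the resource names, parameter, and API version are illustrative, not taken from our template:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "nodeCount": { "type": "int", "defaultValue": 2 }
  },
  "resources": [
    {
      "type": "Microsoft.Network/publicIPAddresses",
      "apiVersion": "2015-06-15",
      "name": "[concat('node-ip-', copyIndex())]",
      "location": "[resourceGroup().location]",
      "copy": {
        "name": "ipLoop",
        "count": "[parameters('nodeCount')]"
      },
      "properties": {
        "publicIPAllocationMethod": "Dynamic"
      }
    }
  ]
}
```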

The Azure Quickstart Templates project on GitHub is a great resource for finding ARM templates. Deploying an application is literally as simple as clicking a Deploy to Azure button and filling in a few template parameter values. On the HPC front, there is a handy HPC Pack example available that can be used to provision and set up the scheduler.

However, as we touched on in our original blog post, using HPC Pack may not be the best choice if you are getting started with MPI and simply want to spin up a new MPI cluster, test your application, and then shut everything back down again. While HPC Pack provides the capabilities of a full-blown HPC scheduler, this additional power comes at the cost of some resource overhead on the submit node (setting up Active Directory, installing SQL Server, etc.). This can be overkill if you just want a one-off cluster to run an MPI application.

Another, potentially lighter-weight, option for running Windows MPI applications in the cloud is the Azure Batch service. Recently, Microsoft announced support for running multi-instance MPI tasks on a pool of VMs. This looks to be a useful option for those who are interested in automating the execution of MPI jobs; however, it does require some investment of developer time to become familiar with the service before MPI jobs can be run.

We feel there is still room for an Azure Resource Manager template that 1) launches a bare-bones Windows MPI cluster without the overhead of HPC Pack, and 2) allows MPI jobs to be run from the command line or a batch script on any operating system.

On that second point above, another interesting development since our original post is that Microsoft has decided to officially support SSH for remote access. Since that announcement, the pre-release version of the code has been made available on GitHub.

So, given those pieces, we decided to put together a simple ARM template to accomplish both of those goals. For someone getting started with MS-MPI, we feel this is a simpler path to getting your code running on a Windows cluster in Azure.

Here is a basic usage example:

  1. Click the Deploy To Azure button from the GitHub project and fill in the template parameters. Here, a 2-node Standard_D2 cluster is being provisioned.
  2. Make a note of the public IP address assigned to the cluster when the deployment completes.
  3. The template will enable SSH and SFTP on all of the nodes. Upload your application to the first VM in the cluster (N0). Here we are using the hello world application from this blog post.
  4. SSH into N0, copy the MPI binary into the shared SMB directory (C:\shared), and run it with mpiexec. Enter your password as the argument to the -pwd switch (redacted below). The -savecreds command-line argument will securely save your credentials on the compute nodes so you don’t have to specify the password in future mpiexec calls; see here for more details. A sketch of what these commands might look like is shown after this list.
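
A sketch of step 4, assuming the binary is named hellompi.exe and the two nodes are named N0 and N1 (both names are illustrative; the password is masked):

```
C:\> copy hellompi.exe C:\shared\
C:\> mpiexec -hosts 2 N0 N1 -wdir C:\shared -pwd ******** -savecreds C:\shared\hellompi.exe
```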

And that’s it! For those who are more GUI-inclined, RDP is also opened up to all of the instances in the MPI cluster. Head on over to the GitHub project page for more details.

This article was written by Ryan Kaneshiro.