
Full name✱

Email✱

Phone

Current company

Links

LinkedIn URL

Twitter URL

GitHub URL

Portfolio URL

Other website

Questions for Applicants - Machine Learning Engineer (required)

To design a neural-network-based predictor for f(x) = sin(x), we built the following network:

Input -> Dense(num_neurons=1) -> Relu() -> Dense(num_neurons=100) -> Relu() -> Dense(num_neurons=1) -> Relu() -> Output

Our loss was L2 and our optimizer was standard gradient descent. The predictor was trained on X_train = numpy.arange(0.0, 314.1, 0.1) and Y_train = numpy.sin(X_train), and subsequently tested on X_test = numpy.arange(-10.0, 10.0, 0.001) and Y_test = numpy.sin(X_test). The predictor nevertheless performs poorly on the test data. What could have gone wrong?✱

The training data is too small to train the network; adding more data, especially for X < 0, would help.

The Relu activations used in the network are wrong; changing the activation in the output node from Relu to Linear should help the network learn the function better.

The network is not deep enough; adding more hidden units/layers will help the network generalize.

L2 loss is not a good metric when regressing for sin(x).

Gradient descent is not the best optimizer for this problem; the Adam optimizer would be a better option.

This type of neural network architecture fundamentally cannot learn a function like sin(x).
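For reference, the setup described in the question can be sketched in plain numpy as a forward pass. This is only an illustrative reconstruction: the random weight initialization and the absence of a training loop are assumptions made here, not part of the original question.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # Elementwise Relu, as used after every Dense layer in the question.
    return np.maximum(0.0, z)

def forward(x, params):
    # Dense(1) -> Relu -> Dense(100) -> Relu -> Dense(1) -> Relu,
    # matching the architecture diagram in the question.
    W1, b1, W2, b2, W3, b3 = params
    h1 = relu(x @ W1 + b1)
    h2 = relu(h1 @ W2 + b2)
    return relu(h2 @ W3 + b3)

# Untrained random weights (illustrative assumption only).
params = (
    rng.normal(size=(1, 1)), np.zeros(1),
    rng.normal(size=(1, 100)), np.zeros(100),
    rng.normal(size=(100, 1)), np.zeros(1),
)

# Training and test data exactly as specified in the question,
# reshaped to column vectors for the matrix products above.
X_train = np.arange(0.0, 314.1, 0.1).reshape(-1, 1)
Y_train = np.sin(X_train)
X_test = np.arange(-10.0, 10.0, 0.001).reshape(-1, 1)
Y_test = np.sin(X_test)

preds = forward(X_test, params)
print(preds.shape)  # one prediction per test point
```

Note that the training inputs span only [0, 314.1) while the test inputs span [-10, 10); comparing `preds` against `Y_test` with an L2 loss reproduces the evaluation the question describes.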