




-
GPU vs. CPU comparison
...A straightforward approach would be to use CPU multithreading to run all of the computation in parallel. However, deep learning models deal with massive vectors containing millions of elements, and a common CPU can only handle around a dozen threads simultaneously. That's where GPUs come in: modern GPUs can schedule millions of threads at once, dramatically speeding up these mathematical operations on massive vectors...
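As a rough illustration, here is a minimal sketch comparing the same element-wise vector operation on CPU and GPU. PyTorch is my own choice of framework (the note does not name one), and timings will vary with hardware:

    import time
    import torch

    # Two "massive vectors" with ten million elements each
    x = torch.randn(10_000_000)
    y = torch.randn(10_000_000)

    # Element-wise work on the CPU, which uses at most a handful of threads
    t0 = time.time()
    z_cpu = x * y + x
    print(f"CPU: {time.time() - t0:.4f}s")

    if torch.cuda.is_available():
        xg, yg = x.cuda(), y.cuda()
        torch.cuda.synchronize()
        t0 = time.time()
        z_gpu = xg * yg + xg   # the same work spread across many thousands of GPU threads
        torch.cuda.synchronize()
        print(f"GPU: {time.time() - t0:.4f}s")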
Jun25 -
Bedrock Titan AI often bumps into its own terms of use
Be careful when requesting even a simple image from Bedrock: it is very sensitive.
Amazon's AI apparently judges some of its own generated content as non-compliant with the terms of use.
We asked "amazon.titan-image-generator-v1" to generate 3 images for this request: "Show me a unicorn in a fairytale environment."
It returned the picture shown here, and failed on the other 2, which it said did not meet its standards of use.
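For reference, a call like that can be made with boto3's bedrock-runtime client. The request body below reflects my understanding of the Titan Image Generator schema and should be checked against the current Bedrock documentation:

    import base64
    import json
    import boto3

    client = boto3.client("bedrock-runtime")

    # Request body per my understanding of the Titan image schema -- verify against the docs
    body = {
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {"text": "Show me a unicorn in a fairytale environment."},
        "imageGenerationConfig": {"numberOfImages": 3, "width": 512, "height": 512},
    }

    response = client.invoke_model(
        modelId="amazon.titan-image-generator-v1",
        body=json.dumps(body),
        contentType="application/json",
        accept="application/json",
    )

    # Images come back base64-encoded (per my understanding of the response format)
    result = json.loads(response["body"].read())
    for i, img in enumerate(result.get("images", [])):
        with open(f"unicorn_{i}.png", "wb") as f:
            f.write(base64.b64decode(img))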
Jun23 -
Machine learning on quantum hardware. Connect to quantum hardware using PyTorch, TensorFlow, JAX, Keras, or NumPy. Build rich and flexible hybrid quantum-classical models.
Good to try on AWS Braket.
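A minimal way to get started on the Braket side is the local simulator from the Amazon Braket Python SDK; the calls below are from memory and worth double-checking:

    from braket.circuits import Circuit
    from braket.devices import LocalSimulator

    # Two-qubit Bell circuit, run locally before sending anything to real hardware
    device = LocalSimulator()
    bell = Circuit().h(0).cnot(0, 1)

    task = device.run(bell, shots=1000)
    print(task.result().measurement_counts)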
Jun18 -
The attention mechanism, initially used in natural language processing, has found its way into finance and other domains. It operates on a simple idea: some parts of the input sequence are more important than others. Attention improves a model's capabilities by allowing it to focus on specific parts of the input sequence while ignoring others.
Incorporating attention into LSTM networks gives the model context for its predictions: certain historical data points may be more relevant than others, and attention gives the LSTM the ability to weigh those points more heavily, leading to more accurate and nuanced predictions.
Prepare the Data ...
Using the Keras functional API, since the attention layer needs to consume the LSTM outputs directly:

    from keras.models import Model
    from keras.layers import Input, LSTM, Dense, AdditiveAttention, Multiply, Flatten

    # Input: windows of shape (timesteps, 1) built in the data-preparation step
    inputs = Input(shape=(X_train.shape[1], 1))

    # Stacked LSTM encoder returning the full sequence of hidden states
    x = LSTM(units=50, return_sequences=True)(inputs)
    x = LSTM(units=50, return_sequences=True, name="lstm_layer")(x)

    # Additive self-attention: the LSTM outputs serve as both query and value,
    # and the attention scores re-weight the hidden states element-wise
    attention_result = AdditiveAttention()([x, x])
    x = Multiply()([x, attention_result])

    # Collapse the weighted sequence and predict a single value
    x = Flatten()(x)
    output = Dense(1)(x)

    model = Model(inputs=inputs, outputs=output)
Add Dropout and BatchNormalization before compiling ...
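One way to wire that in, continuing the graph above; the 0.2 dropout rate, Adam optimizer, and MSE loss are illustrative assumptions, not from the original note:

    from keras.layers import Dropout, BatchNormalization

    # Regularize the flattened attention output before the prediction head
    x = Dropout(0.2)(x)            # dropout rate chosen for illustration
    x = BatchNormalization()(x)
    output = Dense(1)(x)

    model = Model(inputs=inputs, outputs=output)
    model.compile(optimizer="adam", loss="mean_squared_error")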
Train, Predict and Visualize
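A minimal sketch of that last step; the epoch count and batch size are assumptions, and y_train, X_test, y_test are assumed to come from the data-preparation step:

    import matplotlib.pyplot as plt

    # Fit on the prepared windows (hyperparameters are illustrative)
    history = model.fit(X_train, y_train, epochs=50, batch_size=32, validation_split=0.1)

    # Predict on held-out data and compare against the actual series
    predictions = model.predict(X_test)
    plt.plot(y_test, label="actual")
    plt.plot(predictions, label="predicted")
    plt.legend()
    plt.show()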
Jun1