Software Engineer (iOS - ForeFlight) 🖥📱, student pilot ✈️, HUGE Colorado Avalanche fan 🥅, entrepreneur (rrainn, Inc.) ⭐️ https://charlie.fish
Yes. It will just fill your feed with a bunch of things you might not care about. But admin vs. non-admin doesn't matter in the context of what I said.
Your instance is the one that federates. However, it starts with a user subscribing to that content; your instance won't normally federate anything without user interaction.
Normally the solution for the second part is relays, but that isn't something Lemmy currently supports. This issue is very common with smaller instances. It's less of a problem on bigger instances, since users are more likely to have already subscribed to communities that then get federated to your instance automatically. You could experiment with creating a user and subscribing to a bunch of communities so they get federated to your instance.
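If you want to script that experiment, something like this is roughly what I have in mind. It's a rough sketch based on my recollection of the Lemmy v3 HTTP API, so the endpoint paths and response shapes should be double-checked against the API docs, and the instance URL and credentials are placeholders:

```python
# Rough sketch: log a dedicated account in and subscribe it to a list of
# remote communities so their content starts federating to the instance.
# Endpoints and response shapes are from memory of the Lemmy v3 HTTP API.
import requests

INSTANCE = "https://your-instance.example"  # placeholder instance URL
COMMUNITIES = ["!technology@lemmy.world", "!asklemmy@lemmy.ml"]  # example communities

def login(username: str, password: str) -> str:
    r = requests.post(f"{INSTANCE}/api/v3/user/login",
                      json={"username_or_email": username, "password": password})
    r.raise_for_status()
    return r.json()["jwt"]

def subscribe(jwt: str, community: str) -> None:
    headers = {"Authorization": f"Bearer {jwt}"}
    # Resolving the community makes the instance fetch it from the remote server.
    resolved = requests.get(f"{INSTANCE}/api/v3/resolve_object",
                            params={"q": community}, headers=headers)
    resolved.raise_for_status()
    community_id = resolved.json()["community"]["community"]["id"]
    requests.post(f"{INSTANCE}/api/v3/community/follow",
                  json={"community_id": community_id, "follow": True},
                  headers=headers).raise_for_status()

if __name__ == "__main__":
    token = login("federation-bot", "change-me")  # placeholder credentials
    for name in COMMUNITIES:
        subscribe(token, name)
```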
It’s not really any different than hosting any other service.
I was lucky to get in during the early days, when posting Mastodon handles on Twitter was common, so I was able to migrate easily. But I feel like this is a problem with ActivityPub right now. Discovery algorithms can be awful in the timeline, but so useful for finding people/communities to follow.
Yep, just saw that too after I researched it a bit more. What's strange is that I don't remember Eve Energy having a firmware update since then. Makes me wonder if they had it ready to go in previous firmware versions based on internal specs they saw? Or maybe I just forgot about a firmware update I did.
The Tampa Bay Rays will play their 2025 home games at the New York Yankees’ nearby spring training ballpark amid uncertainty about the future of hurricane-damaged Tropicana Field.
> but as the Matter standard doesn't yet support energy monitoring, users are limited to basic features like on and off and scheduling

(from this link)
Granted the article is almost a year old. But I just didn't realize that Matter now supports energy monitoring. Somehow I just missed that news.
Eve Energy smart plugs transmit energy information via Matter
I just learned that the Eve Energy smart plugs transmit energy consumption information via Matter. I didn't think energy consumption information was supported in Matter yet, but it is.
This makes them incredible to use with the Home Assistant Energy dashboard.
Even tho I was hesitant for a while, I took the leap to using the Matter beta Home Assistant integration, and I've had no issues so far.
Multiple Account Support is here! - Echo 1.4
Super happy to announce the release of multiple account support in Echo v1.4! Easily switch between Lemmy accounts (even across multiple instances/servers) in Echo without having to log out of your existing account.
The full release notes are listed below.
```
- Multiple Account support!
- Do you have multiple Lemmy accounts? Maybe across multiple instances? Well now you can sign into all of them in Echo without having to logout of your existing account.
- Requires Echo+ subscription.
- Fixes issue where community list would flash results when opening.
- Adds loading indicator to Explore page after searching.
- Lemmy 0.19.6 support & improvements.
- Fixes issue where in rare cases deleted/removed communities would show in the community list.
- Vast performance improvements.
- More behind the scenes improvements than we can count.
```
In the latest beta of iOS 18.2, Apple upgraded the Find My app with support for sharing a link to...
Best way to determine if a Lemmy server has a pictrs server?
It seems like running a pictrs server is optional when running Lemmy. I'm trying to figure out if a given instance supports pictrs.
I see in the documentation for pictrs, there is a `GET /healthz` endpoint. However, when I try to access `https://lemmy.ml/pictrs/healthz` for example, it gives me a 404. Even tho I know that lemmy.ml has a pictrs server.
What is the best way to determine if a Lemmy server has pictrs?
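For reference, this is roughly the probe I've been running (a minimal sketch; it assumes pictrs would be exposed at `/pictrs/healthz` behind the instance's reverse proxy, which the 404 from lemmy.ml suggests may not actually be the case):

```python
# Minimal sketch: probe an instance for a pictrs healthz endpoint.
# Assumes pictrs is exposed at /pictrs/healthz behind the Lemmy reverse
# proxy -- which, given the 404 from lemmy.ml, may not actually be true.
import requests

def has_pictrs(instance: str) -> bool:
    try:
        response = requests.get(f"https://{instance}/pictrs/healthz", timeout=10)
        return response.status_code == 200
    except requests.RequestException:
        return False

if __name__ == "__main__":
    print(has_pictrs("lemmy.ml"))  # currently prints False for me (404)
```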
I'm not aware of any official Ubiquiti certifications. Maybe it was a 3rd party certification? Someone else might know more than I do tho.
I know I'm not necessarily the target audience for this. But it feels too expensive. 6x the price of Cloudflare R2, almost 13x the price of Wasabi. Even iCloud storage is $0.99 for 50 GB with a 5 GB free tier. But again, I know I'm not necessarily the target audience as I have a lot of technical skills that maybe average users don't have.
If you ever get around to building an API, and are interested in partnerships, let me know. Maybe there is a possibility for integration into !echo@eventfrontier.com 😉.
This worked!!! However, it now looks like I have to pass in 32 comments (the batch size) in order to run a prediction in Core ML? Kinda strange when I could pass a single string to TensorFlow to run a prediction on.
Also, it seems to be much slower than the Create ML model I was playing with: it went from 0.05 ms on average for the Create ML model to 0.47 ms on average for this TensorFlow model. It also looks like this TensorFlow model is running 100% on the CPU (not taking advantage of the GPU or Neural Engine).
Obviously there are some major advantages to using TensorFlow (i.e. I can run it in a server environment, I can better control stopping training early based on that `val_accuracy` metric, etc.). But Create ML seems to really win in other areas, like being able to pass in a simple string (and not having to worry about tokenization), not having to pass in 32 strings for a single prediction, and the performance.
Maybe I should lower my `batch_size`? I've heard there are pros and cons to lowering & increasing `batch_size`. Haven't played around with it too much yet.
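Another thing I might experiment with (just a sketch; I haven't verified that coremltools' flexible input shapes play nicely with this Bidirectional LSTM model) is giving the Core ML input a flexible batch dimension so a single comment can be passed at prediction time:

```python
# Sketch only: convert with a flexible batch dimension (1 to 32) so a
# single tokenized comment can be passed in for prediction. Untested
# with this particular model.
import numpy as np
import coremltools as ct

flexible_batch = ct.RangeDim(lower_bound=1, upper_bound=32, default=1)

coreml_model = ct.convert(
    my_saved_model,  # the loaded Keras model from the training script
    inputs=[ct.TensorType(
        shape=(flexible_batch, max_len),  # max_len comes from preprocessing
        name="embedding_input",
        dtype=np.int32
    )],
    source="tensorflow"
)
```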
Am I just missing something in this analysis?
I really appreciate your help and advice!
Interesting! I'll try this tonight and see how it goes. Really appreciate your reply tho. I'll let you know the outcome.
coremltools Error: ValueError: perm should have the same length as rank(x): 3 != 2
cross-posted from: https://eventfrontier.com/post/177049
coremltools Error: ValueError: perm should have the same length as rank(x): 3 != 2
I keep getting an error `ValueError: perm should have the same length as rank(x): 3 != 2` when trying to convert my model using coremltools.
From my understanding the most common case for this is when your input shape that you pass into coremltools doesn't match your model input shape. However, as far as I can tell in my code it does match. I also added an input layer, and that didn't help either.
I have put a lot of effort into reducing my code as much as possible while still giving a minimal complete verifiable example. However, I'm aware that the code is still a lot. Starting at line 60 of my code is where I create my model, and train it.
I'm running this on Ubuntu, with NVIDIA set up with Docker.
Any ideas what I'm doing wrong?
---
```python
from typing import TypedDict, Optional, List
import tensorflow as tf
import json
from tensorflow.keras.optimizers import Adam
import numpy as np
from sklearn.utils import resample
import keras
import coremltools as ct

# Simple tokenizer function
word_index = {}
index = 1
def tokenize(text: str) -> list:
    global word_index
    global index
    words = text.lower().split()
    sequences = []
    for word in words:
        if word not in word_index:
            word_index[word] = index
            index += 1
        sequences.append(word_index[word])
    return sequences

def detokenize(sequence: list) -> str:
    global word_index
    # Filter sequence to remove all 0s
    sequence = [int(index) for index in sequence if index != 0.0]
    words = [word for word, index in word_index.items() if index in sequence]
    return ' '.join(words)

# Pad sequences to the same length
def pad_sequences(sequences: list, max_len: int) -> list:
    padded_sequences = []
    for seq in sequences:
        if len(seq) > max_len:
            padded_sequences.append(seq[:max_len])
        else:
            padded_sequences.append(seq + [0] * (max_len - len(seq)))
    return padded_sequences

class PreprocessDataResult(TypedDict):
    inputs: tf.Tensor
    labels: tf.Tensor
    max_len: int

def preprocess_data(texts: List[str], labels: List[int], max_len: Optional[int] = None) -> PreprocessDataResult:
    tokenized_texts = [tokenize(text) for text in texts]
    if max_len is None:
        max_len = max(len(seq) for seq in tokenized_texts)
    padded_texts = pad_sequences(tokenized_texts, max_len)

    return PreprocessDataResult({
        'inputs': tf.convert_to_tensor(np.array(padded_texts, dtype=np.float32)),
        'labels': tf.convert_to_tensor(np.array(labels, dtype=np.int32)),
        'max_len': max_len
    })

# Define your model architecture
def create_model(input_shape: int) -> keras.models.Sequential:
    model = keras.models.Sequential()

    model.add(keras.layers.Input(shape=(input_shape,), dtype='int32', name='embedding_input'))
    model.add(keras.layers.Embedding(input_dim=10000, output_dim=128))  # `input_dim` represents the size of the vocabulary (i.e. the number of unique words in the dataset).
    model.add(keras.layers.Bidirectional(keras.layers.LSTM(units=64, return_sequences=True)))
    model.add(keras.layers.Bidirectional(keras.layers.LSTM(units=32)))
    model.add(keras.layers.Dense(units=64, activation='relu'))
    model.add(keras.layers.Dropout(rate=0.5))
    model.add(keras.layers.Dense(units=1, activation='sigmoid'))  # Output layer, binary classification (meaning it outputs a 0 or 1, false or true). The sigmoid function outputs a value between 0 and 1, which can be interpreted as a probability.

    model.compile(
        optimizer=Adam(),
        loss='binary_crossentropy',
        metrics=['accuracy']
    )

    return model

# Train the model
def train_model(
    model: tf.keras.models.Sequential,
    train_data: tf.Tensor,
    train_labels: tf.Tensor,
    epochs: int,
    batch_size: int
) -> tf.keras.callbacks.History:
    return model.fit(
        train_data,
        train_labels,
        epochs=epochs,
        batch_size=batch_size,
        callbacks=[
            keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=5),
            keras.callbacks.TensorBoard(log_dir='./logs', histogram_freq=1),
            # When downgrading from TensorFlow 2.18.0 to 2.12.0 I had to change this from `./best_model.keras` to `./best_model.tf`
            keras.callbacks.ModelCheckpoint(filepath='./best_model.tf', monitor='val_accuracy', save_best_only=True)
        ]
    )

# Example usage
if __name__ == "__main__":
    # Check available devices
    print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))

    with tf.device('/GPU:0'):
        print("Loading data...")
        data = (["I love this!", "I hate this!"], [0, 1])
        rawTexts = data[0]
        rawLabels = data[1]

        # Preprocess data
        processedData = preprocess_data(rawTexts, rawLabels)
        inputs = processedData['inputs']
        labels = processedData['labels']
        max_len = processedData['max_len']

        print("Data loaded. Max length: ", max_len)

        # Save word_index to a file
        with open('./word_index.json', 'w') as file:
            json.dump(word_index, file)

        model = create_model(max_len)

        print('Training model...')
        train_model(model, inputs, labels, epochs=1, batch_size=32)
        print('Model trained.')

        # When downgrading from TensorFlow 2.18.0 to 2.12.0 I had to change this from `./best_model.keras` to `./best_model.tf`
        model.load_weights('./best_model.tf')
        print('Best model weights loaded.')

        # Save model
        # I think that .h5 extension allows for converting to CoreML, whereas .keras file extension does not
        model.save('./toxic_comment_analysis_model.h5')
        print('Model saved.')

        my_saved_model = tf.keras.models.load_model('./toxic_comment_analysis_model.h5')
        print('Model loaded.')

        print("Making prediction...")
        test_string = "Thank you. I really appreciate it."
        tokenized_string = tokenize(test_string)
        padded_texts = pad_sequences([tokenized_string], max_len)
        tensor = tf.convert_to_tensor(np.array(padded_texts, dtype=np.float32))
        predictions = my_saved_model.predict(tensor)
        print(predictions)
        print("Prediction made.")

        # Convert the Keras model to Core ML
        coreml_model = ct.convert(
            my_saved_model,
            inputs=[ct.TensorType(shape=(max_len,), name="embedding_input", dtype=np.int32)],
            source="tensorflow"
        )

        # Save the Core ML model
        coreml_model.save('toxic_comment_analysis_model.mlmodel')
        print("Model successfully converted to Core ML format.")
```
Code including Dockerfile & start script as GitHub Gist: https://gist.github.com/fishcharlie/af74d767a3ba1ffbf18cbc6d6a131089
Got it. Thanks for the reply! So is Keras just a dependency used in TensorFlow?
From what I've seen TensorFlow is still more popular. But that might be starting to change. Maybe we need to make a PyTorch community as well 🤔
TensorFlow Lemmy Community
Discussion, questions, news, and more about the TensorFlow [https://www.tensorflow.org] machine learning library.
I created a Lemmy community specifically for TensorFlow! Check it out and subscribe if you're interested.
Echo Status Update - early November 2024
I wanted to provide the community with a quick status update on the development of Echo. This is the longest stretch without an update since Echo was released. This is mostly because I'm currently working on roughly 6+ major new features for Echo that are all in varying stages of completion. (Also because this past week my computer was being repaired, so that took away from being able to work on Echo).
I hope to wrap up at least one of these features and get it shipped this coming week.
Over time I do anticipate release frequency will slow down. But as part of my goal to build the best Lemmy client for iOS, releases will still occur with regular frequency.
Thank you to everyone who has downloaded the app so far. And to everyone who has given feedback, I really appreciate it. All of your feedback has been heard, and I'm actively working to implement most of it into the application. Stay tuned!
Were there major performance improvements between 2.12.0 and 2.18.0?
I recently had to downgrade from TensorFlow 2.18.0 to 2.12.0 so that I could turn my model into a Core ML model, since coremltools only supports TensorFlow 2.12.0.
After doing that, training my model is taking roughly 3-4x longer than it did on 2.18.0.
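One thing I still need to rule out (just a sanity-check sketch, not a confirmed cause) is whether the 2.12.0 build is actually seeing my GPU, since silently falling back to the CPU could explain a 3-4x slowdown on its own:

```python
# Sanity check: confirm the older TensorFlow build still sees the GPU and
# log which device a small op actually runs on.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
print("Visible GPUs:", tf.config.list_physical_devices('GPU'))

# If the matmul below reports CPU:0 instead of GPU:0, the slowdown is
# probably the environment (e.g. the Docker image/CUDA setup), not 2.12.0 itself.
tf.debugging.set_log_device_placement(True)
a = tf.random.normal((1024, 1024))
b = tf.random.normal((1024, 1024))
c = tf.matmul(a, b)
print(c.device)
```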
I wish it worked on more webpages. But totally agree.
Dodgers take Game 2 as series shifts to NY
Dodgers beat the Yankees 4-2 as the series shifts to Yankee Stadium.
Are the Yankees in desperation mode yet? Judge doesn’t look good at the plate.
Walk it off! - Dodgers take game 1
Dodgers take game 1 on a Freddie Freeman grand slam in extra innings. Final: 6-3.
Ding 🔔 Echo Push Notification support has arrived!
Notifications have arrived in Echo! With version 1.3, available now in the App Store, you can enable push notifications to receive updates on new posts in communities or new comments on posts. More push notification options are coming in the future.
What push notification options would you like to see?
After updating to Echo v1.3, to enable push notifications for a community or post, simply navigate to the community or post you wish to enable notifications for, tap the ellipsis icon, and choose to enable notifications.
Please note that you must be subscribed to Echo+ in order to enable & receive push notifications.
Full release notes:
- Ding Push Notification support is here!
- Currently supports subscribing to new comments on a post & subscribing to new posts within a community. More to come.
- Requires Echo+ subscription.
- Now able to view Echo+ benefits even while subscribed.
- General bug fixes, performance improvements, and behind the scenes improvements.
- Updated privacy policy & terms of service.
Dodgers vs Yankees World Series
Game 1: Friday October 25th @ LAD
Game 2: Saturday October 26th @ LAD
Game 3: Monday October 28th @ NYY
Game 4: Tuesday October 29th @ NYY
Game 5 (if needed): Wednesday October 30th @ NYY
Game 6 (if needed): Friday November 1st @ LAD
Game 7 (if needed): Saturday November 2nd @ LAD
---
All games are on Fox and start at 8:08pm Eastern / 5:08pm Pacific.
Comment design is on my todo list for a refresh. I thought the design was going to work but after using it myself, it doesn’t hit the mark.
Right now it’s a drawer at the bottom of the post view that you can pull up to comment.
If you want to reply to a comment, you should be able to swipe the comment from left to right, and that'll mark it as the comment you're replying to.
Right now you must be subscribed to Echo+ in order to comment.
Thanks so much for trying it out! Much much more to come, so stay tuned.
As for the refresh thing, thanks for the report. It’s on my list to resolve. I’ll add a +1 to that item to bump it up on the priority list. Not quite sure when it’ll be resolved, but hopefully soon.
Thank you so much for checking it out! I really appreciate the feedback. I am considering a few ideas to revamp the subscription. No guarantees yet, but stay tuned to this community for updates.
What? I'm not following. Steam isn't federating with anyone. This is about having a link to an external site. Nothing more. Has nothing to do with federation directly.
That is so bad. They clearly don’t understand the appeal of decentralized systems…
Just added to my todo list. Hopefully I'll get around to this today! Thanks!!