Javier Albarracin

Results: 10 comments by Javier Albarracin

You can make them bigger with a simple scale (not AI):

```python
from PIL import Image
import numpy as np

for decoded_img in decoded_images:
    img = Image.fromarray(np.asarray(decoded_img * 255, dtype=np.uint8))
    img = img.resize((512, 512))  #
```

# Fix for the "@" issue (Spanish keyboard): Alt + Q types "@"

You need Colab Pro; it works with the High-RAM setting.

Also, I added these lines at the beginning to make the epochs run:

```python
from tensorflow.compat.v1 import ConfigProto, InteractiveSession

config = ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)
```
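On TF 2.x you can get the same memory-growth behaviour without the v1 compat session; a sketch using the standard `tf.config` API (not part of the original comment):

```python
import tensorflow as tf

# Allocate GPU memory on demand instead of grabbing it all up front
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)
```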

Another thing I just noticed: if I change the "wework example" to anything else, for example this text: "Microsoft released a new technology for computers. Google and Apple released new...

After modifying the pad_sequences, I had to change the labels to this: `labels = ['none','sport', 'bussiness', 'politics', 'tech', 'entertainment']`, adding "none" because the prediction is a number from 0 to...
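To make that concrete, this is roughly how a prediction on a new text maps back to that list (a sketch only; `tokenizer`, `model` and `max_length` are the usual notebook objects and are assumptions here, not part of this comment):

```python
import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences

labels = ['none', 'sport', 'bussiness', 'politics', 'tech', 'entertainment']

new_text = ["Microsoft released a new technology for computers."]
seq = tokenizer.texts_to_sequences(new_text)                    # tokenizer from training (assumed)
padded = pad_sequences(seq, maxlen=max_length, padding='post')  # same padding as training (assumed)

pred = model.predict(padded)          # shape (1, 6): one score per index 0..5
print(labels[np.argmax(pred[0])])     # index 0 ('none') is a placeholder no training example uses
```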

> > Right. labels should be `['none','sport', 'bussiness', 'politics', 'tech', 'entertainment']`
> >
> > The last layer outputs labels 0, 1, 2, 3, 4, 5, although 0 has never been...

OK, the answer for the correct label order is here:

```python
from tensorflow.keras.preprocessing.text import Tokenizer

# after this code:
label_tokenizer = Tokenizer()
label_tokenizer.fit_on_texts(labels)
# you have to add this line:
label_index = label_tokenizer.word_index
```

This will create the...
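Printing that dictionary shows the actual mapping and explains the off-by-one: Keras' `Tokenizer.word_index` starts counting at 1, so index 0 is never assigned to a real label. A quick check (the exact order depends on how often each label appears, so the printed dict below is only an example):

```python
# word_index numbers the labels starting at 1; index 0 stays unused
print(label_index)            # e.g. {'sport': 1, 'politics': 2, ...}

# Build the reverse lookup to turn a predicted index back into a label name
index_to_label = {v: k for k, v in label_index.items()}
```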

Another thing you might wonder is why, if we are training N classes, we need one more class; the answer lies in the loss function (loss='sparse_categorical_crossentropy'), where you...
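In short, `sparse_categorical_crossentropy` expects integer labels in the range [0, num_classes), and since the label tokenizer numbers the classes 1..N, the output layer needs N + 1 units so that label N is still a valid index. A minimal sketch for the 5-label case (`vocab_size` and the hidden layer sizes are assumptions, not from the original notebook):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 16),   # vocab_size comes from the text tokenizer (assumed)
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(24, activation='relu'),
    # 6 units = 5 real classes + the unused index 0 from the label tokenizer
    tf.keras.layers.Dense(6, activation='softmax'),
])
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
```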