
Advanced search
Other articles (55)
-
Participate in its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
To do this, we use the SPIP translation interface, where all of MediaSPIP's language modules are available. You just need to sign up on the translators' mailing list to ask for more information.
Currently, MediaSPIP is only available in French and (...) -
The authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page -
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (8433)
-
Is there any way to see the x_test data and labels after the train_test_split operation?
10 February 2023, by MatPar
I have been searching Google and elsewhere for a solution to this challenge for a few weeks now.


What is it?
I am trying to visualise the data that ends up in the Xtest variable via the train_test_split() call below.


Either as text/string output or as the actual image held in that variable at that moment; both would be very helpful.


For now I am using 80 videos in an 80:20 training/testing split, where validation takes 20% of the training set.


I selected various types of data for training to see how well the model predicts the outcome.


So in the end I have just 16 videos for testing for now.


WHAT I AM TRYING TO SOLVE: which videos those 16 are.


I have no way of knowing which videos were chosen for that group of 16.


To solve this, I am trying to pass the video labels through the split so that each entry in the Xtest data variable can be identified.


WHY I AM DOING THIS
The model is being challenged by a selection of videos that I have no control over.
If I can identify these videos, I can analyse the data and improve the model's performance accordingly.


The confusion matrix has the same problem: it shows me just 4 misclassifications, but I have no clue which videos those 4 misclassifications are.


That is not a good approach, hence my asking these questions.


THE CODE (where I am at)


X_train, Xtest, Y_train, Ytest = train_test_split(
    X, Y, train_size=0.8, test_size=0.2,
    random_state=1, stratify=Y, shuffle=True)
#print(Xtest)

history = model.fit(X_train, Y_train, validation_split=0.20,
                    batch_size=args.batch, epochs=args.epoch,
                    verbose=1, callbacks=[rlronp], shuffle=True)

predict_labels = model.predict(Xtest, batch_size=args.batch, verbose=1,
                               steps=None, callbacks=None, max_queue_size=10,
                               workers=1, use_multiprocessing=False)
print('These are the prediction labels', predict_labels)  # This has no video label identifiers



This is working fine, but I cannot draw a hypothesis until I see what's within the Xtest variable.


All I am getting is an array of data with no labels.


For example: Xtest has 16 videos after the split operation:


Is it vid04.mp4, vid34.mp4, vid21.mp4, vid34.mp4, vid74.mp4, vid54.mp4, vid71.mp4, vid40.mp4, vid06.mp4, vid27.mp4, vid32.mp4, vid18.mp4, vid66.mp4, vid42.mp4, vid8.mp4, vid14.mp4, or some other selection?


This is what I really want to see!


Please help me understand the process and where I am going wrong.
Thanks in advance for acknowledging my challenge!
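A standard way to get at this (a minimal, self-contained sketch, not the asker's actual pipeline: the filenames list and the dummy X/Y are hypothetical stand-ins) is to pass the video filenames to train_test_split alongside X and Y. The function splits any number of equally sized sequences with the same shuffled indices, so the names stay aligned with the data rows:

import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins for the asker's X and Y.
X = np.random.rand(10, 4)                 # 10 samples, 4 features each
Y = np.array([0, 1] * 5)                  # 2 balanced classes
# One filename per sample, in the same order as the rows of X and Y.
filenames = [f"vid{i:02d}.mp4" for i in range(10)]

# Adding `filenames` as a third array makes train_test_split return a
# matching names_train/names_test pair, split with the same indices.
X_train, Xtest, Y_train, Ytest, names_train, names_test = train_test_split(
    X, Y, filenames,
    train_size=0.8, test_size=0.2, random_state=1, stratify=Y, shuffle=True)

print("Videos in the test set:", names_test)

The same names_test list can then be zipped with Ytest and the model's predictions to name the videos behind the 4 misclassifications in the confusion matrix.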


-
Extract luminance data using ffmpeg libavfilter, specifically PIX_FMT_YUV420P type
12 March 2014, by id128
This pertains to ffmpeg 0.7 (yes, I know it's old, but data access should be similar).
I am writing a libavfilter to extract the luminance data from each frame. In the draw_slice() function I have access to an AVFilterLink structure, which in turn gives me access to an AVFilterBufferRef structure holding the uint8_t *data[] pointers. With the PIX_FMT_YUV420P type, I think data[0], data[1] and data[2] refer to the Y, U and V channels respectively.
My question is: given the pointer to data[0] (the luminance plane), how do I interpret the data? The pixfmt.h header file states:
PIX_FMT_YUV420P, ///< planar YUV 4:2:0, 12bpp, (1 Cr & Cb sample per 2x2 Y samples)
Does that mean I have to interpret the luminance plane data every 2 bytes? Also, what exactly is the datatype of the values pointed to by the pointer: int, float, etc.?
Thanks in advance
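For what it's worth: in YUV420P the luma plane holds one unsigned byte (uint8_t, not int or float) per pixel at full frame resolution; the 12 bpp figure is the average across all three planes (8 bits of luma plus two quarter-size chroma planes), so there is no need to step 2 bytes at a time. Inside the filter, the luma of pixel (row, col) is data[0][row * linesize[0] + col], since each row is padded out to linesize[0] bytes. The same layout can be checked outside libavfilter; a minimal sketch, assuming a local input.mp4 of known size and the ffmpeg CLI on PATH:

import subprocess
import numpy as np

WIDTH, HEIGHT = 640, 360  # assumed frame size of input.mp4

# Ask ffmpeg for one raw yuv420p frame: the full-size Y plane comes first,
# followed by the quarter-size U and V planes (w*h*3/2 bytes in total).
raw = subprocess.run(
    ["ffmpeg", "-i", "input.mp4", "-vframes", "1",
     "-f", "rawvideo", "-pix_fmt", "yuv420p", "pipe:1"],
    capture_output=True, check=True).stdout

# Luma: one uint8 per pixel, full resolution, no subsampling.
y_plane = np.frombuffer(raw[:WIDTH * HEIGHT], dtype=np.uint8).reshape(HEIGHT, WIDTH)
print("Luma of top-left pixel:", y_plane[0, 0])

One caveat: raw output from the CLI is tightly packed, whereas inside the filter each row may carry padding, which is why the linesize stride matters there.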
-
Passing raw RTMP video data to FFmpeg
24 December 2019, by don_aman
I am trying to implement a simple RTMP client in Node.js using TCP sockets and FFmpeg. So far I have implemented the connection to a stream, after which the server starts sending video and audio data messages. According to Wireshark, the first byte of such messages describes the frame type and the data encoding (codec); in my client app the video data is in "Sorensen H.263" format.
When I use ffplay to play the stream, it logs that the video data is in FLV format, but when I pass the packets' payload data to ffmpeg in this format I always get this error:
pipe:: Invalid data found when processing input
What parameters should I provide to ffmpeg when piping RTMP video data to it, to get proper video output in any desired format?
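The usual cause of that error is that raw RTMP message payloads are only FLV tag bodies, while ffmpeg's FLV demuxer expects a complete FLV stream: the 9-byte file header, then each tag's 11-byte header, and a 4-byte PreviousTagSize after every tag. A minimal sketch of the wrapping (in Python rather than Node.js, with hypothetical placeholder payloads; the demuxer is forced with -f flv on stdin):

import struct
import subprocess

# Hypothetical: (tag_type, timestamp_ms, payload) tuples captured from RTMP
# messages; tag_type is 8 for audio, 9 for video, and payload is the raw
# message body (whose first byte holds the frame-type/codec-id nibbles).
tags = [(9, 0, b"..."), (8, 0, b"...")]

# Force the FLV demuxer on stdin; remux without re-encoding
# (Matroska accepts the FLV codecs, including Sorenson H.263).
proc = subprocess.Popen(
    ["ffmpeg", "-f", "flv", "-i", "pipe:0", "-c", "copy", "out.mkv"],
    stdin=subprocess.PIPE)

# 9-byte FLV file header: "FLV", version 1, flags 0x05 (audio + video),
# header size 9, followed by PreviousTagSize0 = 0.
proc.stdin.write(b"FLV\x01\x05\x00\x00\x00\x09" + struct.pack(">I", 0))

for tag_type, ts, payload in tags:
    size = len(payload)
    # 11-byte FLV tag header: type (1 byte), data size (UI24),
    # timestamp (UI24), extended timestamp (1 byte), stream ID (UI24, 0).
    header = (struct.pack(">B", tag_type)
              + struct.pack(">I", size)[1:]
              + struct.pack(">I", ts & 0xFFFFFF)[1:]
              + struct.pack(">B", (ts >> 24) & 0xFF)
              + b"\x00\x00\x00")
    proc.stdin.write(header + payload)
    # PreviousTagSize = 11-byte tag header + payload length.
    proc.stdin.write(struct.pack(">I", 11 + size))

proc.stdin.close()
proc.wait()

With Sorenson H.263 the tag bodies need no extra sequence headers, so this wrapping alone is usually enough for ffmpeg to recognise the piped input.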