
Media (1)
- The Pirate Bay from Belgium
1 April 2013
Updated: April 2013
Language: French
Type: Image
Other articles (83)
- Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
- Contribute to its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
To do this, we use the SPIP translation interface, where all of MediaSPIP's language modules are available. You just need to sign up on the translators' mailing list to ask for more information.
At the moment MediaSPIP is only available in French and (...)
- Creating farms of unique websites
13 April 2011
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (15288)
- Extract luminance data using ffmpeg libavfilter, specifically PIX_FMT_YUV420P type
12 March 2014, by id128
This pertains to ffmpeg 0.7 (yes, I know it's old, but data access should be similar).
I am writing a libavfilter to extract the luminance data from each frame. In the draw_slice() function I have access to an AVFilterLink structure, which in turn gives me access to an AVFilterBufferRef structure holding the uint8_t *data[] pointers. With the PIX_FMT_YUV420P type, I think data[0], data[1] and data[2] refer to the Y, U and V channels respectively.
My question is, with the pointer to data[0] (luminance plane), how do I interpret the data? The pixfmt.h header file states:
PIX_FMT_YUV420P, ///< planar YUV 4:2:0, 12bpp, (1 Cr & Cb sample per 2x2 Y samples)
Does that mean I have to interpret the luminance plane data every 2 bytes? Also, what exactly is the data type of the values pointed to: int, float, etc.?
Thanks in advance
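For orientation, here is a small sketch of how a PIX_FMT_YUV420P frame is laid out in memory, using a hypothetical raw dump of a single frame rather than the libavfilter API itself (the file name and frame size are made up). The Y plane is one unsigned byte (uint8_t) per pixel at full resolution; only the U and V planes are subsampled 2x2, which is where the 12bpp average comes from.

import numpy as np

# Hypothetical values: one tightly packed YUV420P frame dumped to disk.
width, height = 640, 480
raw = np.fromfile('frame.yuv', dtype=np.uint8)

y_size = width * height                   # Y: 1 byte per pixel, full resolution
c_size = (width // 2) * (height // 2)     # U and V: one sample per 2x2 block of Y samples

y = raw[:y_size].reshape(height, width)                            # luminance plane
u = raw[y_size:y_size + c_size].reshape(height // 2, width // 2)
v = raw[y_size + c_size:].reshape(height // 2, width // 2)

print('mean luminance:', y.mean())

Inside the filter itself the planes are not guaranteed to be packed like this: each row of data[0] should be addressed through the corresponding linesize stride rather than by assuming width-packed rows.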
- Is there any way to see the X_test data and labels after the train_test_split operation?
10 February 2023, by MatPar
I have been searching Google etc. for a solution to this challenge for a few weeks now.


What is it?
I am trying to visualise the data that is being used in the Xtest variable via the train_test_split() call below.


Either as a text/string output or as the actual image held in that variable at that point in time. Both would be very helpful.


For now I am using 80 videos in an 80:20 training/testing split, where validation takes 20% of the training set.


I selected various types of data for training to see how well the model predicts the outcome.


So in the end I have just 16 videos for testing for now.


WHAT I AM TRYING TO SOLVE: identifying exactly which 16 videos those are.


I have no way of knowing which videos were chosen for that group of 16 during processing.


To solve this, I am trying to pass in the video labels so that I can identify the specific selection within the Xtest variable.


WHY I AM DOING THIS
The model is being challenged by a selection of videos that I have no control over.
If I can identify these videos, I can analyse the data and enhance the model's performance accordingly.


The confusion matrix is doing the same: it presents me with just 4 misclassifications, but I have no clue which 4 samples those are.


That is not a good approach, hence me asking these questions.


** THE CODE ** where I am at


from sklearn.model_selection import train_test_split  # source of the split being discussed

# Stratified, shuffled 80/20 split; only X and Y are split, so nothing here
# records which original videos end up in Xtest.
X_train, Xtest, Y_train, Ytest = train_test_split(X, Y, train_size=0.8, test_size=0.2, random_state=1, stratify=Y, shuffle=True)
#print(Xtest)

history = model.fit(X_train, Y_train, validation_split=0.20, batch_size=args.batch, epochs=args.epoch, verbose=1, callbacks=[rlronp], shuffle=True)

predict_labels = model.predict(Xtest, batch_size=args.batch, verbose=1, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False)
print('This is prediction labels', predict_labels)  # this has no video label identifiers



This is working fine, but I cannot draw a hypothesis until I see what's within the Xtest variable.


All I am getting is an array of data with no labels.


For example: Xtest has 16 videos after the split operation:


Is it vid04.mp4, vid34.mp4, vid21.mp4, vid34.mp4, vid74.mp4, vid54.mp4, vid71.mp4, vid40.mp4, vid06.mp4, vid27.mp4, vid32.mp4, vid18.mp4, vid66.mp4, vid42.mp4, vid8.mp4, vid14.mp4, etc.?


This is what I really want to see !!!


Please assist me in understanding the process and where I am going wrong.
Thanks in advance for acknowledging my challenge!
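One way to recover those identities is sketched below. It is not the original code: it assumes a hypothetical array video_names holding one filename per sample in the same order as X and Y, and it reuses X, Y and predict_labels from the code above. Since train_test_split shuffles any number of arrays consistently, the names can simply be passed through the same call; the misclassification lookup assumes Y holds integer class labels (adapt the argmax/comparison if they are one-hot encoded).

import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical: one filename per sample, recorded in the same order in which
# X and Y were assembled (for example while loading the videos).
video_names = np.array(['vid01.mp4', 'vid02.mp4', 'vid03.mp4'])  # ...and so on, illustrative only

X_train, Xtest, Y_train, Ytest, names_train, names_test = train_test_split(
    X, Y, video_names,
    train_size=0.8, test_size=0.2, random_state=1, stratify=Y, shuffle=True)

print('Videos behind Xtest:', names_test)          # the 16 test filenames

# After model.predict(), the same array shows which test videos were misclassified.
predicted = np.argmax(predict_labels, axis=1)
print('Misclassified videos:', names_test[predicted != Ytest])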


- AVFrame: How to get/replace plane data buffer(s) and size?
19 July 2018, by user10099431
I'm working on gstreamer1.0-libav (1.6.3), trying to port custom FPGA-based H264 video acceleration from gstreamer 0.10.
The data planes (YUV) used to be allocated by a simple malloc back in gstreamer 0.10, so we simply replaced the AVFrame.data[i] pointers with pointers to memory in our video acceleration core. It seems to be MUCH more complicated in gstreamer 1.12.
For starters, I tried copying the YUV planes from AVFrame.data[i] to a separate buffer, which worked fine! Since I haven't found an immediate way to obtain the size of AVFrame.data[i], and I noticed that data[0], data[1] and data[2] seem to lie in a single contiguous buffer, I simply used (data[1] - data[0]) for the size of the Y plane and (data[2] - data[1]) for the sizes of the U/V planes respectively. This works fine, except for one scenario:
- Input H264 stream with resolution of 800x600 or greater
- The camera is covered (jacket, hand, ...)
This causes a SEGFAULT in the memcpy of the V plane (data[2]) when using the sizes determined as described above. Before covering the camera, the stream is displayed completely fine... so for some reason the dark screen changes the plane sizes?
My ultimate goal is to replace the data[i] pointers allocated by gstreamer with my custom memory allocation (for further processing)... where exactly are these buffers assigned, can I change them, and how can I obtain the size of each plane (data[0], data[1], data[2])?
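As a rough illustration of the size question only (a sketch of the arithmetic with hypothetical stride values; it does not touch the gstreamer/libav allocation path): for YUV420P the usable byte size of each plane follows from the per-plane stride reported in AVFrame.linesize[] and the plane height, whereas differences between the data[i] pointers also include any alignment padding and are not guaranteed to be meaningful at all, because the three planes need not be allocated contiguously.

def yuv420p_plane_sizes(linesize, height):
    # Byte size of each plane, derived from the strides libav reports in
    # AVFrame.linesize[] instead of from differences between data[] pointers.
    chroma_height = (height + 1) // 2          # U and V have half the rows
    return (
        linesize[0] * height,                  # Y plane
        linesize[1] * chroma_height,           # U plane (the halved width is already folded into its stride)
        linesize[2] * chroma_height,           # V plane
    )

# Hypothetical 800x600 frame whose rows were padded to a 64-byte-aligned stride.
print(yuv420p_plane_sizes((832, 416, 416), 600))   # (499200, 124800, 124800)

How the data[i] pointers themselves can be swapped for externally allocated memory depends on how the decoder's buffers are set up, which this sketch deliberately leaves out.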