Advanced search

Media (3)


Other articles (113)

  • Organize by category

    17 May 2013, by

    In MediaSPIP, a section has two names: category and rubrique.
    The various documents stored in MediaSPIP can be filed under different categories. You can create a category by clicking on "publish a category" in the publish menu at the top right (after logging in). A category can also be placed inside another category, so you can build a tree of categories.
    When the next document is published, the newly created category will be offered (...)

  • Retrieving information from the master site when installing an instance

    26 November 2010, by

    Usefulness
    On the main site, a mutualisation instance is defined by several things: its data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table corresponding to an id_auteur in the spip_auteurs table), who will be the only one able to definitively create the mutualisation instance;
    It can therefore be quite useful to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)

  • Contribute to translation

    13 April 2011

    You can help us to improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, which allows it to spread to new linguistic communities.
    To do this, we use the SPIP translation interface, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
    MediaSPIP is currently available in French and English (...)

On other sites (4765)

  • Video File Size Optimization

    14 March 2020, by Heba Gamal Eldin

    I'm trying the FFmpeg two-pass technique in Python but couldn't find any Python tutorials covering this task.
    Is there no way to do it other than using subprocess? If there's an illustrative example, please provide it.

    Note:

    I have tried the two-pass approach in a script like this:

    # using an FFmpeg command-builder wrapper (FFmpeg(inputs=..., outputs=...), .cmd)
    input_fit = {self.video_in: None}
    output = {
        None: "-c:v h264 -b:v 260k -pass 1 -an -f mp4 NUL && ^",
        self.video_out: 'ffmpeg -i "%s" -c:v h264 -b:v 260k -pass 2 ' % self.video_in,
    }  # video_out is the name of the output file
    model = FFmpeg(inputs=input_fit, outputs=output)
    print(model.cmd)

    It raises an error:

    FFRuntimeError: exited with status 1

    but when I take the generated command and run it in the ffmpeg CLI it runs without errors and generates the video perfectly.
    So could anyone tell me what the problem is, please?
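    One workaround is to drive each pass with `subprocess` directly, one `ffmpeg` invocation per pass, instead of embedding shell operators like `&& ^` in an argument string (those are only interpreted by a Windows cmd shell, not by a process launcher). This is a minimal sketch, not the asker's code; `two_pass_commands`, the file names, and the bitrate are assumptions:

    ```python
    import os
    import subprocess

    def two_pass_commands(src, dst, bitrate="260k", codec="h264"):
        """Build the two ffmpeg command lines for a two-pass encode."""
        null_sink = "NUL" if os.name == "nt" else "/dev/null"  # pass 1 discards output
        pass1 = ["ffmpeg", "-y", "-i", src, "-c:v", codec, "-b:v", bitrate,
                 "-pass", "1", "-an", "-f", "mp4", null_sink]
        pass2 = ["ffmpeg", "-y", "-i", src, "-c:v", codec, "-b:v", bitrate,
                 "-pass", "2", dst]
        return pass1, pass2

    def run_two_pass(src, dst):
        for cmd in two_pass_commands(src, dst):
            subprocess.run(cmd, check=True)  # raises CalledProcessError if a pass fails
    ```

    Because each pass is its own argument list, no shell parsing is involved, which is likely why the pasted command works in a terminal but the wrapped one exits with status 1.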

  • Is there any way to see the X_test data and labels after the train_test_split operation?

    10 February 2023, by MatPar

    I have been searching Google for a solution to this challenge for a few weeks now.

    What is it?
    I am trying to visualise the data that is being used in the XTEST variable via the split() function below, either as text/string output or as the actual image held in that variable at that given time. Both would be very helpful.

    For now I am using 80 videos in a Training 80 : Testing 20 split, where the validation takes 20% of Training. I selected various types of data for the training to see how well the model is at predicting the outcome. So in the end I have just 16 videos for Testing for now.

    WHAT I AM TRYING TO SOLVE: which videos are those?
    I have no way of knowing which videos were chosen in that group of 16 for processing. To solve this, I am trying to pass in the video label so that it can present an ID of the specific selection within the XTEST data variable.

    WHY I AM DOING THIS
    The model is being challenged by a selection of videos that I have no control over. If I can identify these videos, I can analyse the data and enhance the model's performance accordingly. The confusion matrix does the same: it presents me with just 4 misclassifications, but which 4 are they? I have no clue. That is not a good approach, hence these questions.

    THE CODE, where I am at:
    X_train, Xtest, Y_train, Ytest = train_test_split(
        X, Y, train_size=0.8, test_size=0.2,
        random_state=1, stratify=Y, shuffle=True)
    # print(Xtest)

    history = model.fit(X_train, Y_train, validation_split=0.20,
                        batch_size=args.batch, epochs=args.epoch,
                        verbose=1, callbacks=[rlronp], shuffle=True)

    predict_labels = model.predict(Xtest, batch_size=args.batch, verbose=1,
                                   steps=None, callbacks=None, max_queue_size=10,
                                   workers=1, use_multiprocessing=False)
    print('This is prediction labels', predict_labels)  # no video label identifiers
    This is working fine, but I cannot draw a hypothesis until I see what's within the Xtest variable. All I am getting is an array of data with no labels.

    For example, Xtest has 16 videos after the split operation: is it vid04.mp4, vid34.mp4, vid21.mp4, vid34.mp4, vid74.mp4, vid54.mp4, vid71.mp4, vid40.mp4, vid06.mp4, vid27.mp4, vid32.mp4, vid18.mp4, vid66.mp4, vid42.mp4, vid8.mp4, vid14.mp4, etc.? This is what I really want to see!

    Please help me understand the process and where I am going wrong. Thanks in advance for acknowledging my challenge!
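    One way to answer a question like the one above (a sketch with assumed stand-in data; only `train_test_split` and the `random_state=1, stratify=Y, shuffle=True` arguments come from the code shown) is to pass the video file names through `train_test_split` as an extra array, so they are shuffled and split exactly like X and Y:

    ```python
    # Sketch: scikit-learn splits any number of arrays consistently,
    # so the file names can ride along with the features and labels.
    import numpy as np
    from sklearn.model_selection import train_test_split

    X = np.arange(80 * 4).reshape(80, 4)                      # stand-in for video features
    Y = np.array([0, 1] * 40)                                 # stand-in labels
    names = np.array(["vid%02d.mp4" % i for i in range(80)])  # one ID per video

    X_train, Xtest, Y_train, Ytest, names_train, names_test = train_test_split(
        X, Y, names, train_size=0.8, test_size=0.2,
        random_state=1, stratify=Y, shuffle=True)

    print(names_test)   # the 16 videos the model will be tested on
    ```

    The same trick can name the confusion-matrix errors: if the model's outputs are reduced to class predictions, `names_test[predicted_classes != Ytest]` lists the misclassified files.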

  • vf_dnn_processing.c : add dnn backend openvino

    25 May 2020, by Guo, Yejun
    

    We can try with the srcnn model from sr filter.
    1) get srcnn.pb model file, see filter sr
    2) convert srcnn.pb into openvino model with command :
    python mo_tf.py --input_model srcnn.pb --data_type=FP32 --input_shape [1,960,1440,1] --keep_shape_ops

    See the script at https://github.com/openvinotoolkit/openvino/tree/master/model-optimizer
    We'll see srcnn.xml and srcnn.bin at current path, copy them to the
    directory where ffmpeg is.

    I have also uploaded the model files at https://github.com/guoyejun/dnn_processing/tree/master/models

    3) run with openvino backend :
    ffmpeg -i input.jpg -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=openvino:model=srcnn.xml:input=x:output=srcnn/Maximum -y srcnn.ov.jpg
    (The input.jpg resolution is 720*480)

    Also copied below are the logs from my skylake machine (4 cpus) with the openvino backend
    and the tensorflow backend, just for your information.

    $ time ./ffmpeg -i 480p.mp4 -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=tensorflow:model=srcnn.pb:input=x:output=y -y srcnn.tf.mp4

    frame= 343 fps=2.1 q=31.0 Lsize= 2172kB time=00:00:11.76 bitrate=1511.9kbits/s speed=0.0706x
    video:1973kB audio:187kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead : 0.517637%
    [aac @ 0x2f5db80] Qavg : 454.353
    real 2m46.781s
    user 9m48.590s
    sys 0m55.290s

    $ time ./ffmpeg -i 480p.mp4 -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=openvino:model=srcnn.xml:input=x:output=srcnn/Maximum -y srcnn.ov.mp4

    frame= 343 fps=4.0 q=31.0 Lsize= 2172kB time=00:00:11.76 bitrate=1511.9kbits/s speed=0.137x
    video:1973kB audio:187kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead : 0.517640%
    [aac @ 0x31a9040] Qavg : 454.353
    real 1m25.882s
    user 5m27.004s
    sys 0m0.640s

    Signed-off-by : Guo, Yejun <yejun.guo@intel.com>
    Signed-off-by : Pedro Arthur <bygrandao@gmail.com>

    • [DH] doc/filters.texi
    • [DH] libavfilter/vf_dnn_processing.c