Other articles (104)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Submitting bugs and patches

    10 April 2011

    Unfortunately, no piece of software is ever perfect...
    If you think you have found a bug, report it in our ticket system, taking care to include the relevant information: the exact type and version of the browser in which you encountered the anomaly; as precise a description of the problem as possible; if possible, the steps to reproduce it; a link to the site/page in question;
    If you think you have fixed the bug yourself (...)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (11292)

  • Is there any way to see the X_test data and labels after the train_test_split operation?

    10 February 2023, by MatPar

    I have been searching Google and elsewhere for a solution to this for a few weeks now.

    What is it? I am trying to visualise the data that ends up in the Xtest variable via the train_test_split() call below, either as text/string output or as the actual image held in that variable at that point. Both would be very helpful.

    


    For now I am using 80 videos in an 80:20 train/test split, where validation takes 20% of the training set.

    I selected various types of data for training to see how well the model predicts the outcome.

    So in the end I have just 16 videos for testing.

    


    What I am trying to solve: which videos are those?

    I have no way of knowing which videos were chosen in that group of 16 for processing.

    To solve this, I am trying to pass in the video labels so that I can identify the specific selection within the Xtest variable.

    


    Why I am doing this: the model is being challenged by a selection of videos that I have no control over.
    If I can identify these videos, I can analyse the data and improve the model's performance accordingly.

    The confusion matrix has the same problem: it shows me just 4 misclassifications, but I have no clue which ones they are.

    That is not a good approach, hence these questions.

    


    The code, where I am at:

X_train, Xtest, Y_train, Ytest = train_test_split(
    X, Y, train_size=0.8, test_size=0.2,
    random_state=1, stratify=Y, shuffle=True)
#print(Xtest)

history = model.fit(X_train, Y_train, validation_split=0.20,
                    batch_size=args.batch, epochs=args.epoch,
                    verbose=1, callbacks=[rlronp], shuffle=True)

predict_labels = model.predict(Xtest, batch_size=args.batch, verbose=1,
                               steps=None, callbacks=None, max_queue_size=10,
                               workers=1, use_multiprocessing=False)
print('This is prediction labels', predict_labels)  # no video label identifiers


    


    This is working fine, but I cannot draw a hypothesis until I see what is in the Xtest variable.

    All I am getting is an array of data with no labels.

    For example, Xtest holds 16 videos after the split:

    is it vid04.mp4, vid34.mp4, vid21.mp4, vid34.mp4, vid74.mp4, vid54.mp4, vid71.mp4, vid40.mp4, vid06.mp4, vid27.mp4, vid32.mp4, vid18.mp4, vid66.mp4, vid42.mp4, vid8.mp4, vid14.mp4, etc.?

    


    This is what I really want to see!

    Please help me understand the process and where I am going wrong.
    Thanks in advance for acknowledging my challenge!
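    One common way to recover those identifiers, sketched below with stand-in data (the 80 videos, binary labels, and vidNN.mp4 names are illustrative, not the asker's real dataset): scikit-learn's train_test_split accepts any number of parallel arrays and shuffles them all with the same indices, so passing the filename list alongside X and Y yields a matching ids_test list.

```python
from sklearn.model_selection import train_test_split

# Illustrative stand-ins: 80 "videos" with binary labels.
video_ids = [f"vid{i:02d}.mp4" for i in range(80)]
X = list(range(80))              # stand-in for the per-video feature arrays
Y = [i % 2 for i in range(80)]   # stand-in for the class labels

# train_test_split splits every array it is given with the same shuffled
# indices, so ids_test names exactly the videos that landed in Xtest.
X_train, Xtest, Y_train, Ytest, ids_train, ids_test = train_test_split(
    X, Y, video_ids, train_size=0.8, test_size=0.2,
    random_state=1, stratify=Y, shuffle=True)

print(ids_test)   # the 16 videos the model will be tested on
```

    The same list then labels the confusion matrix: prediction j corresponds to ids_test[j], so the four misclassified samples can be named directly.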

    


  • ffmpeg messes up variables [duplicate]

    14 February 2023, by poeplva19

    I am trying to split audio files by their chapters. I downloaded this audio with yt-dlp with its chapter metadata on. I tried this very simple script to do the job:

    


    #!/bin/sh

ffmpeg -loglevel 0 -i "$1" -f ffmetadata meta # take the metadata and output it to the file meta
cat meta | grep "END" | awk -F"=" '{print $2}' | awk -F"007000000" '{print $1}' > ends
cat meta | grep "title=" | awk -F"=" '{print $2}' | cut -c4- > titles
from="0"
count=1
while IFS= read -r to; do
    title=$(head -$count titles | tail -1)  
    ffmpeg -loglevel 0 -i "$1" -ss $from -to $to -c copy "$title".webm
    echo $from $to
    count=$(( $count+1 ))
    from=$to
done < ends


    


    You can see that I echo $from and $to because I noticed they come out wrong. Why is this? When I comment out the ffmpeg command in the while loop, $from and $to are correct, but when it is uncommented they turn into garbage numbers.
Output with ffmpeg commented out:

    


    0 465
465 770
770 890
890 1208
1208 1554
1554 1793
1793 2249
2249 2681
2681 2952
2952 3493
3493 3797
3797 3998
3998 4246
4246 4585
4585 5235
5235 5375
5375 5796
5796 6368
6368 6696
6696 6961


    


    Output with ffmpeg uncommented:

    


    0 465
465 70
70 890
890 08
08 1554
1554 3
3 2249
2249
2952
2952 3493
3493
3998
3998 4246
4246 5235
5235 796
796 6368
6368


    


    I tried lots of other things that I thought might be the problem, but they didn't change anything. One I remember is putting $from and $to in %H:%M:%S form, which again gave the same result.
Thanks in advance.
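    For what it's worth, the symptom above matches a well-known shell pitfall rather than an ffmpeg bug: inside a `while read` loop, ffmpeg inherits the loop's redirected stdin (the `ends` file) and consumes bytes from it, so the next `read` starts mid-line. A minimal reproduction without ffmpeg, plus the usual fixes:

```shell
#!/bin/sh
# Minimal reproduction: any command that reads stdin inside a
# `while read` loop steals input meant for the loop.
printf '1\n2\n3\n4\n' > ends_demo

while IFS= read -r to; do
    echo "loop saw: $to"
    head -n 1 > /dev/null   # stands in for ffmpeg, which also reads stdin
done < ends_demo            # prints "loop saw: 1" then "loop saw: 3"

rm -f ends_demo

# Fixes: tell ffmpeg not to touch stdin ...
#   ffmpeg -nostdin -loglevel 0 -i "$1" -ss "$from" -to "$to" -c copy "$title".webm
# ... or redirect its stdin away from the loop's input:
#   ffmpeg -loglevel 0 -i "$1" -ss "$from" -to "$to" -c copy "$title".webm < /dev/null
```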

    


  • how to read partial fragmented mp4 from buffer or stdin

    1 March 2023, by poush

    I am facing a weird challenge, and I am still wondering whether there is something wrong in my understanding of fragmented mp4 concepts.

    


    I have a buffered stream of a video file that I am streaming from AWS. I then pass it to stdin and use ffmpeg to encode it.

    


    What I want to achieve: if I skip, say, the first 10000 bytes from the source (S3 here), I still want to be able to encode the rest of the video in the buffer.

    


    I tried creating a fragmented mp4 (10 s fragments) and splitting the file into 20 MB chunks; now no chunk except the first one will play. I am trying to understand how HLS or DASH uses fragments to jump directly to a part of the video.

    


    Basically, I want to mimic the HLS player behaviour. Say I start streaming from the S3 bucket at byte 200000; I want to be able to encode the video from there.
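    If it helps, the chunk behaviour described above is expected: even in a fragmented MP4, all decoder configuration lives in the initial moov box, so a chunk cut from the middle contains moof+mdat fragments but no initialisation data. HLS and DASH players fetch that init segment once and logically prepend it to every media segment. A sketch of both sides with ffmpeg (file names are illustrative, assuming a local input.mp4):

```shell
# Produce a fragmented MP4: empty moov up front, then self-contained
# moof+mdat fragments cut at keyframes roughly every 10 s
# (-frag_duration is in microseconds).
ffmpeg -i input.mp4 -c copy \
       -movflags empty_moov+frag_keyframe+default_base_moof \
       -frag_duration 10000000 fragmented.mp4

# DASH-style packaging writes the init segment as a separate file, so a
# later media segment should become playable by prepending init to it:
ffmpeg -i input.mp4 -c copy -f dash -seg_duration 10 manifest.mpd
#   -> init-stream0.m4s, chunk-stream0-00001.m4s, ...
cat init-stream0.m4s chunk-stream0-00007.m4s > playable.mp4
```

    Note the seek granularity this implies: playback can start at a fragment boundary (each fragment opens with a keyframe), not at an arbitrary byte offset, so skipping a raw 10000 or 200000 bytes will usually land mid-box, where no parser can resynchronise. HLS/DASH avoid this by requesting whole segments, or exact byte ranges taken from a segment index, rather than arbitrary offsets.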