
Media (91)

Other articles (108)

  • User profiles

    12 April 2011, by

    Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialised, visible only if the visitor is logged in to the site.
    The user can edit their profile from their author page; a "Modifier votre profil" (Modify your profile) link in the navigation is (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is version 0.2 or higher. If necessary, contact your MédiaSpip administrator to find out.

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
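    As a rough illustration of the kind of transcoding described above (this is not MediaSPIP's actual pipeline; the codec choices and file names below are assumptions), a comparable set of web-friendly outputs could be produced with ffmpeg like this:

import subprocess

def encode_video(source: str, stem: str) -> None:
    # HTML5-friendly outputs: OGV (Theora/Vorbis) and WebM (VP8/Vorbis)
    subprocess.run(["ffmpeg", "-y", "-i", source, "-c:v", "libtheora",
                    "-c:a", "libvorbis", "-f", "ogg", stem + ".ogv"], check=True)
    subprocess.run(["ffmpeg", "-y", "-i", source, "-c:v", "libvpx",
                    "-c:a", "libvorbis", stem + ".webm"], check=True)
    # Flash-friendly output: MP4 (H.264/AAC)
    subprocess.run(["ffmpeg", "-y", "-i", source, "-c:v", "libx264",
                    "-c:a", "aac", stem + ".mp4"], check=True)

def encode_audio(source: str, stem: str) -> None:
    # HTML5-friendly output: Ogg Vorbis; Flash-friendly output: MP3
    subprocess.run(["ffmpeg", "-y", "-i", source, "-vn",
                    "-c:a", "libvorbis", stem + ".ogg"], check=True)
    subprocess.run(["ffmpeg", "-y", "-i", source, "-vn",
                    "-c:a", "libmp3lame", stem + ".mp3"], check=True)

encode_video("upload.mov", "upload")
encode_audio("voice-note.wav", "voice-note")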

On other sites (11484)

  • avcodec: add prores_metadata bsf for set the color property of each prores frame

    28 October 2018, by Martin Vignali
    
    • [DH] doc/bitstream_filters.texi
    • [DH] libavcodec/Makefile
    • [DH] libavcodec/bitstream_filters.c
    • [DH] libavcodec/prores_metadata_bsf.c
  • When using the exiftool `-v` command, the Rotation property is missing

    4 June 2023, by SShah

    I am still not certain if this is the right place to ask. I am currently attempting to back up 60 GB+ of videos on my phone to my PC, and while doing so I am also trying to leverage FFmpeg and/or exiftool to convert the files to HEVC format (for reduced file size) and to ensure all relevant metadata is copied over from the source file to the converted file.

    


    My question, to be specific, relates to exiftool in particular, where I noticed a discrepancy and was wondering if someone can assist me:

    


    I am currently converting the video files using the following ffmpeg command:

    


    ffmpeg -hwaccel qsv -i "VID_20220629_201116.mp4" -crf 27 -movflags use_metadata_tags -c copy -c:v hevc_qsv -global_quality 25 -vf scale_qsv=1920:1080 -c:a copy -preset slow "VID_20220629_201116_h265 (2).mp4"

    


    This does convert the file to H.265, but the final file only copies over some datestamps and fails to add/update additional related tags (such as GPS) that my phone has added to the source file. More annoyingly, it is not detecting the orientation of the source video (which is in portrait) and produces an output in landscape.

    


    I have then written the following python script where I use the the PyExifTool module to try and update the file with any missing tags from the original file.

    


import json
from pprint import pprint

import exiftool

# Paths to the original file and the converted (H.265) file
source_file_path = r"\VID_20220629_201116.mp4"
destination_file_path = r"\VID_20220629_201116_h265 (2).mp4"

print(f"Source File Path: {source_file_path}")
print(f"Destination File Path: {destination_file_path}\n")

with exiftool.ExifToolHelper() as et:
    # get_tags() returns one dict of tag name -> value per file
    source_file_metadata: dict = et.get_tags(source_file_path, "All")[0]
    destination_file_metadata: dict = et.get_tags(destination_file_path, "All")[0]

    keys_to_check = []
    source_file_tag_names = source_file_metadata.keys()
    destination_file_tag_names = destination_file_metadata.keys()
    tags_to_update = {}

    # Dump both tag sets to disk for manual comparison
    with open("source.json", "w") as f:
        f.write(json.dumps(source_file_metadata, indent=4, sort_keys=True))

    with open("destination.json", "w") as f:
        f.write(json.dumps(destination_file_metadata, indent=4, sort_keys=True))

    # Collect every tag that is missing from the destination, plus any
    # date-related tag whose value differs from the source.
    for tag in source_file_tag_names:
        if tag not in destination_file_tag_names or "date" in tag.lower():
            keys_to_check.append(tag)

            if tag in destination_file_metadata:
                if destination_file_metadata[tag] == source_file_metadata[tag]:
                    continue

            tags_to_update[tag] = source_file_metadata[tag]

    pprint(f"Tags to Update: {tags_to_update}")

    if tags_to_update:
        print(et.set_tags(destination_file_path, tags_to_update))


    


    This script compares the destination file with the source file and updates the destination file with any tags that are present in the source file but not in the destination file.
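
    As an alternative to the per-tag loop, I understand exiftool's own -tagsFromFile mode can copy everything it knows how to write in a single call; the sketch below uses the same file paths as above, and is not something I have fully verified on these particular files:

import subprocess

source_file_path = r"\VID_20220629_201116.mp4"
destination_file_path = r"\VID_20220629_201116_h265 (2).mp4"

# Copy every writable tag from the source video into the converted file, in place.
subprocess.run(
    [
        "exiftool",
        "-overwrite_original",          # do not keep a "_original" backup copy
        "-tagsFromFile", source_file_path,
        "-All:All",                     # copy all tags in all groups
        destination_file_path,
    ],
    check=True,
)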

    


    Regarding the orientation issue, I noticed that if I manually run the following command in the command line:

    


    exiftool "source_file_name.mp4"

    


    Then, in the output it generates, it displays the Rotation property as follows:

    


    Rotation                        : 90

    


    However, this doesn't seem to be captured by my script, and therefore I cannot see it in the destination file after I run it. I also suspect this property would help fix the orientation issue that I am currently facing with ffmpeg.

    


    So, after looking deeper into exiftool, I found that I can add -v to the command to display the output as variable names:

    exiftool -v "source_file_name.mp4"

    


    After running the above command, I see no variable in the output called, or associated with, the Rotation attribute, and this is why I believe my script is unable to apply it to the destination file either.
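
    As a sketch of what I have been considering (using the same file paths as above; I have not confirmed that this actually restores the playback orientation), explicitly requesting the QuickTime Rotation tag and writing it to the converted file with PyExifTool would look like this:

import exiftool

source_file_path = r"\VID_20220629_201116.mp4"
destination_file_path = r"\VID_20220629_201116_h265 (2).mp4"

with exiftool.ExifToolHelper() as et:
    # Request the tag explicitly instead of relying on the "All" dump.
    source_tags = et.get_tags(source_file_path, ["QuickTime:Rotation"])[0]
    rotation = source_tags.get("QuickTime:Rotation")
    print(f"Source rotation: {rotation}")

    if rotation is not None:
        # Rotation is writable for MP4/MOV; setting it should adjust the
        # video track's display matrix.
        print(et.set_tags(destination_file_path, {"QuickTime:Rotation": rotation}))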

    


    Sorry, I understand my description is long, and I appreciate you taking the time to review it. Please let me know if there is a way I can map this Rotation value, and/or if you think there is a much better solution for mapping all the metadata tags from the source file to the converted destination file.

    


    Thank you.

    


  • My C++ software uses ffmpeg in streaming mode, but I want to decode the data as quickly as possible; how do I do that?

    14 September 2022, by Alexis Wilke

    When I run ffmpeg on my command line to convert an m4a file to mp3, it reports ×45 to ×55 for the speed at which it decodes the input data. In this case, the 45-minute audio file gets converted in 2 minutes.

    


    When I run the same process through my C++ code, I stream the data. This is because my code accepts data coming from the network, so it is often a little faster to stream (unfortunately not with m4a data, since the header is placed at the end of the file...)

    


    However, there seems to be something in ffmpeg that makes it think that if I want to stream the data it needs to be done in realtime. In other words, the frames come through at a speed of ×1 instead of the possible average of ×50.

    


    Is there a flag or setting that I need to turn ON or OFF so the streaming process runs at ×50 or so?

    



    


    I allocate the context like so:

    


    size_t const avio_buffer_size(4096);
    unsigned char * avio_buffer(reinterpret_cast<unsigned char *>(av_malloc(avio_buffer_size)));
    AVIOContext * context(avio_alloc_context(
                              avio_buffer
                            , avio_buffer_size
                            , 0             // write flag
                            , this          // opaque
                            , decoder_read_static
                            , nullptr       // write func.
                            , decoder_seek_static));

    To do the streaming, I use custom I/O in the context:


    AVFormatContext * format_context(avformat_alloc_context());
    format_context->pb = context;
    format_context->flags |= AVFMT_FLAG_CUSTOM_IO;
    avformat_open_input(
              &format_context
            , "input"           // filename (unused)
            , nullptr           // input format
            , nullptr);         // options

    Next I get the audio stream index:


    avformat_find_stream_info(format_context, nullptr);
    AVCodec * decoder_codec(nullptr);
    int const index(av_find_best_stream(
                  format_context
                , AVMEDIA_TYPE_AUDIO
                , -1            // wanted stream number
                , -1            // related stream
                , &decoder_codec
                , 0));          // flags

    That has the side effect of telling us which decoder to use:


    AVCodecContext * decoder_context = avcodec_alloc_context3(decoder_codec);
    avcodec_parameters_to_context(
              decoder_context
            , format_context->streams[index]->codecpar);
    avcodec_open2(
                  decoder_context
                , decoder_codec
                , nullptr);        // options

    And finally, I loop through the frames:


    AVFrame * frame(av_frame_alloc());
    AVPacket av_packet;
    av_init_packet(&av_packet);
    for(;;)
    {
        if(av_read_frame(format_context, &av_packet) < 0)   // end of stream or error
        {
            break;
        }
        if(av_packet.stream_index != index)
        {
            av_packet_unref(&av_packet);
            continue;
        }
        avcodec_send_packet(decoder_context, &av_packet);
        for(;;)
        {
            if(avcodec_receive_frame(decoder_context, frame) < 0)   // needs more input or end
            {
                break;
            }
            // ...copy data from frame...
            av_frame_unref(frame);
        }
        av_packet_unref(&av_packet);
    }

    Note: I don't show all the error handling, use of RAII, etc., in an attempt to show the code in its simplest form.
