Tag: plugins

Other articles (55)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps that led to the problem; and a link to the site/page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • Authorizations overridden by plugins

    27 April 2010

    Mediaspip core
    autoriser_auteur_modifier(), so that visitors are able to modify their own information on the authors page

  • Managing creation and editing rights for objects

    8 February 2011

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site, configurable in the form template management; adding notes to articles; adding captions and annotations to images;

On other sites (11110)

  • Confusion about PTS in video files and media streams

    11 November 2014, by user2452253

    Is it possible that the PTS of a particular frame in a file is different from the PTS of the same frame in the same file while it is being streamed?

    When I read a frame using av_read_frame I store the video stream in an AVStream. After I decode the frame with avcodec_decode_video2, I store the time stamp of that frame in an int64_t using av_frame_get_best_effort_timestamp. Now if the program is getting its input from a file I get a different timestamp from when I stream the input (from the same file) to the program.

    To change the input type I simply change the argv argument from "/path/to/file.mp4" to something like "udp://localhost:1234", then I stream the file with ffmpeg on the command line: "ffmpeg -re -i /path/to/file.mp4 -f mpegts udp://localhost:1234". Could it be that the "-f mpegts" argument changes some characteristics of the media?

    Below is my code (simplified). By reading the ffmpeg mailing list archives I realized that the time_base that I’m looking for is in the AVStream and not the AVCodecContext. Instead of using av_frame_get_best_effort_timestamp I have also tried using the packet.pts but the results don’t change.
    I need the time stamps so that I have a notion of frame number in the streaming video being received.
    I would really appreciate any sort of help.

    //..
    //argv[1] = "/file.mp4";
    argv[1] = "udp://localhost:7777";
    // define AVFormatContext, AVFrame, etc.
    // register av, avcodec, avformat_network_init(), etc.
    avformat_open_input(&pFormatCtx, argv[1], NULL, NULL);
    avformat_find_stream_info(pFormatCtx, NULL);
    // find the video stream...
    // pointer to the codec context...
    // open codec...
    pFrame = av_frame_alloc();
    while (av_read_frame(pFormatCtx, &packet) >= 0) {
        AVStream *stream = pFormatCtx->streams[videoStream];
        if (packet.stream_index == videoStream) {
            avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
            if (frameFinished) {
                // best-effort timestamp, expressed in the stream's time_base
                int64_t perts = av_frame_get_best_effort_timestamp(pFrame);
                if (isMyFrame(pFrame)) {
                    // convert to seconds using the stream (not codec) time_base
                    cout << perts * av_q2d(stream->time_base) << "\n";
                }
            }
        }
        // free allocated space
        av_free_packet(&packet);
    }
    //..
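
    Since what is ultimately wanted is a frame number rather than a raw timestamp, one possible approach (a minimal sketch, assuming the stream and pFrame variables from the loop above; it is not part of the original post) is to normalise the best-effort PTS against the stream's start_time and average frame rate, so that the file input and the UDP input can be compared even though their absolute timestamps differ:

    // Hypothetical helper: derive an approximate frame index from a decoded frame.
    // 'stream' is the AVStream and 'frame' the decoded AVFrame from the loop above.
    static int64_t approx_frame_index(AVStream *stream, AVFrame *frame)
    {
        int64_t pts = av_frame_get_best_effort_timestamp(frame);
        if (pts == AV_NOPTS_VALUE)
            return -1;                      // no usable timestamp for this frame

        // Subtract the stream start time so both inputs count from zero.
        int64_t start = (stream->start_time != AV_NOPTS_VALUE) ? stream->start_time : 0;
        double seconds = (pts - start) * av_q2d(stream->time_base);

        // Convert elapsed seconds to a frame count via the average frame rate.
        double fps = av_q2d(stream->avg_frame_rate);
        return (int64_t)(seconds * fps + 0.5);
    }

    Remuxing with "-f mpegts" generally rewrites the timestamps (MPEG-TS uses a 90 kHz clock and typically starts at a non-zero offset), so the absolute PTS values on the UDP input would not be expected to match those read straight from the MP4; subtracting start_time is one way to make the two comparable, although the result is only an approximation for variable-frame-rate material.
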
  • Live video encoding using...?

    27 December 2013, by Basic

    I'm attempting to write a fairly simple application that will stream video/audio from a webcam to someone else across the internet (à la Skype, but with more control).

    There seems to be very little useful/relevant information on the subject, and what I can find is largely outdated. From my research so far, x264 seems to be the way to go, as it offers an ultrafast option which is designed for this situation.

    I'm able to turn on the webcam and receive a stream of images. I can also listen on an audio device and get samples.

    Where I'm failing is encoding that information in such a way as to be able to stream with a minimum of latency (from what I've read, a 200ms delay is the goal for no obvious lag, including network latency - so let's aim for 100-150ms).

    Things I've tried

    ffmpeg

    This seems to be the most widely used option for encoding. I've had two real issues using it. Firstly, even using x264 with no look-aheads and the bare minimum buffers for stability, the delay seems to be on the order of 700ms using image2pipe. Secondly, it requires ffmpeg to be installed - being able to do this without an external dependency would be nice.

    VLC

    As with ffmpeg, this requires an external program, which is a negative. Even worse, I can't seem to get a latency of under 2 seconds, and it seems to increase over time. I've also only been able to get VLC to capture the camera itself rather than take a stream of images, which means I don't get a chance to pre-process them.

    DirectShow

    I've seen a number of sites recommending the Windows DirectShow encoders, but I haven't been able to find one that works at anything like real time. In fact, the only one I've managed to get going reliably is a Windows Media codec that has a massive latency and a fairly large output size.

    Other considerations

    None of the above address the problem of adding an audio stream to the video. I'm not sure if I should attempt to encode them together or send a separate stream alongside the video.

    In short, I've been Googling for a week or so now and haven't found a decent way to do this. Can someone please point me at a decent example/guide?
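
    For what it's worth, the ultrafast/low-latency route usually comes down to two x264 options, preset=ultrafast and tune=zerolatency (the latter disables lookahead and internal frame buffering). The sketch below shows how an encoder could be opened with those options through the FFmpeg C API; the function name and the frame-size/rate parameters are placeholders for whatever the webcam delivers, not something taken from the question:

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>
    }

    // Minimal sketch: open an H.264 (libx264) encoder configured for low latency.
    AVCodecContext *open_low_latency_x264(int width, int height, int fps)
    {
        const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264);
        if (!codec)
            return NULL;

        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        ctx->width         = width;
        ctx->height        = height;
        ctx->pix_fmt       = AV_PIX_FMT_YUV420P;
        ctx->time_base.num = 1;
        ctx->time_base.den = fps;
        ctx->gop_size      = fps;     // roughly one keyframe per second
        ctx->max_b_frames  = 0;       // B-frames add reordering delay

        // x264-specific options: skip lookahead and frame buffering.
        av_opt_set(ctx->priv_data, "preset", "ultrafast", 0);
        av_opt_set(ctx->priv_data, "tune", "zerolatency", 0);

        if (avcodec_open2(ctx, codec, NULL) < 0) {
            avcodec_free_context(&ctx);
            return NULL;
        }
        return ctx;
    }

    Calling the library in-process like this also removes the external ffmpeg executable dependency mentioned above; how much of the 700ms image2pipe delay it recovers would still depend on the capture and network sides, so the latency figures are something to measure rather than a guarantee.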

  • Why does ffmpeg return "No such file or directory"?

    4 July 2020, by JackNewman

    I'm trying to split a video file into 2 second increments and then merge the video back together.

    import subprocess

    source_vid_path = r"C:\SplitAndMergeVids\Before\before.mp4"
    ffcat_path = r'C:\SplitAndMergeVids\Chunk\video.ffcat'
    chunks_path = r'C:\SplitAndMergeVids\Chunk\chunk-%03d.mp4'
    segments_time = '2'
    # Split the source into ~2 second chunks and write the segment list (video.ffcat).
    cmd_input = rf'ffmpeg -fflags +genpts -i {source_vid_path} -map 0 -c copy -f segment -segment_format mp4 -segment_time {segments_time} -segment_list {ffcat_path} -reset_timestamps 1 -v error {chunks_path}'
    output = str(subprocess.run(cmd_input, shell=True, capture_output=True))
    print(output)

    output_path = r'C:\SplitAndMergeVids\Output\output.mp4'
    # Concatenate the chunks listed in video.ffcat back into one file.
    second_input = rf'ffmpeg -y -v error -i {ffcat_path} -map 0 -c copy {output_path}'
    output = str(subprocess.run(second_input, shell=True, capture_output=True))
    print(output)

    The first subprocess runs perfectly, but the second one returns:

    "Impossible to open 'chunk-000.mp4'\r\nC:\\SplitAndMergeVids\\Chunk\\video.ffcat: No such file or directory".

    The full output looks like:

    CompletedProcess(args='ffmpeg -fflags +genpts -i C:\\SplitAndMergeVids\\Before\\before.mp4 -map 0 -c copy -f segment -segment_format mp4 -segment_time 2 -segment_list C:\\SplitAndMergeVids\\Chunk\\video.ffcat -reset_timestamps 1 -v error C:\\SplitAndMergeVids\\Chunk\\chunk-%03d.mp4', returncode=0, stdout=b'', stderr=b'')
    CompletedProcess(args='ffmpeg -y -v error -i C:\\SplitAndMergeVids\\Chunk\\video.ffcat -map 0 -c copy C:\\SplitAndMergeVids\\Output\\output.mp4', returncode=1, stdout=b'', stderr=b"[concat @ 0000028691a6c6c0] Impossible to open 'chunk-000.mp4'\r\nC:\\SplitAndMergeVids\\Chunk\\video.ffcat: No such file or directory\r\n")

    When I run cmd_input and second_input manually in cmd, everything works perfectly. I don't understand how the first command can create a file at ffcat_path, and yet the second command, using the same ffcat_path, returns "No such file or directory" when the file certainly does exist.
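
    One plausible explanation (an assumption, since nothing in the question confirms it) is a working-directory mismatch: the segment muxer writes the chunk names into video.ffcat as bare relative filenames, and the concat demuxer then tries to open them relative to wherever the second ffmpeg process runs, which subprocess.run inherits from the Python process but which was presumably C:\SplitAndMergeVids\Chunk when the commands were typed into cmd. A generated list typically looks something like this:

    ffconcat version 1.0
    file chunk-000.mp4
    file chunk-001.mp4
    file chunk-002.mp4

    If that is the cause, passing cwd=r'C:\SplitAndMergeVids\Chunk' to the second subprocess.run call, or adding -segment_list_entry_prefix with the chunk directory to the first command so the list contains absolute paths, should make the behaviour match the manual run.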