Other articles (98)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several additional plugins, on top of those used by the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a mutualisation instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

  • Frequent problems

    10 March 2010

    PHP with safe_mode enabled
    One of the main sources of problems comes from the PHP configuration, in particular having safe_mode enabled.
    The solution would be either to disable safe_mode, or to place the script in a directory accessible to Apache for the site.
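
    For reference, a minimal sketch of the first option: safe_mode is set in php.ini (the directive only exists in PHP before 5.4, where it was removed entirely).

    ; php.ini: disable safe_mode (directive removed entirely as of PHP 5.4)
    safe_mode = Off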

On other sites (9500)

  • There is a video stream option and player, but I can't find where to manage the video and stream

    26 January 2018, by Chanaka De Silva

    I'm working on this repository. In this application, we can upload a video taken by an action camera or a drone; the app reads the GPS coordinates (lat, lng) and draws the path of the video on the map. We can also play the video through the system after uploading.

    But the video files are far too large, so they take a long time to process, and also to download and play when we host them on a server. I wanted to reduce the size, so I wrote this code.

    var ffmpeg = require('ffmpeg'); // node-ffmpeg module

    try {
        new ffmpeg('demo.mp4', function (err, video) {
            if (!err) {
                video
                    .setVideoSize('50%', true, false) // half size, keep pixel aspect ratio
                    .setVideoFrameRate(30)
                    .save('output.mp4', function (error, file) {
                        if (error) {
                            console.log("error : ", error);
                        }
                        if (!error)
                            console.log('Video file: ' + file);
                    });
                console.log('The video is ready to be processed');
            } else {
                console.log('Error: ' + err);
            }
        });
    } catch (e) {
        // node-ffmpeg throws synchronously if the input cannot be read
        console.log(e.code, e.msg);
    }
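
    For comparison (not from the repository, just a sketch assuming the ffmpeg binary is installed on the server; demo.mp4 and output.mp4 stand in for the real file names), the same reduction can be expressed as a single command line:

    ffmpeg -i demo.mp4 -vf "scale=iw/2:-2" -r 30 -c:a copy output.mp4

    Here scale=iw/2:-2 halves the width and picks a matching even height, -r 30 caps the frame rate at 30, and -c:a copy passes the audio through untouched.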

    But the issue is that I can't find where the video comes from in this application. As you can see, we need to pass something like "demo.mp4" to my code. This is the index file of this application: https://github.com/chanakaDe/replay-lite/blob/master/frontend/index.html

    And this is the player file: https://github.com/chanakaDe/replay-lite/blob/master/frontend/player.js

    This is the file called api.js: https://github.com/chanakaDe/replay-lite/blob/master/frontend/api.js

    This file also has some video functions: https://github.com/chanakaDe/replay-lite/blob/master/gopro/index.js

    Guys, you may see this as a silly question, but you have the repo to check. Could you please take a look and let me know where to get a video file or stream so I can plug in my size-reduction code?

  • Syncing Audio with Video in an iOS RTSP Player

    3 October 2015, by Dave Thomas

    I am combining two different classes from two different git projects to create an RTSP streamer for an iOS live streaming application.

    Edit: I agree with the -1; this question is probably a shot in the dark. But to answer the question "why am I not using the DFURTSPPlayer library entirely?": because I would rather use the OpenGL YUV display from the second project, hackacam, than decode the video frames into UIImages the way DFURTSPPlayer does. Hackacam does not have audio.

    Also, please comment if you downvote; at least help me find an answer by telling me what I need to refine to be clear, or point out if this question is inappropriate.

    My current issue is that the audio playback has about a one-second latency and is out of sync with the video, which is close to real time.

    I know the audio itself is in sync at the source because I've tested the RTSP streams in VLC, so something is wrong with my implementation: mostly the frankensteining of these two projects together, and the fact that I am not familiar with the ffmpeg C library or AudioQueue on iOS.

    Any help would be greatly appreciated!

    I've taken the AudioStreamer class from this repository:
    https://github.com/durfu/DFURTSPPlayer

    https://github.com/durfu/DFURTSPPlayer/blob/master/DFURTSPPlayer/DFURTSPPlayer/FFMpegDecoder/AudioStreamer.m

    And I am trying to get it to work with this one:
    https://github.com/hackacam/ios_rtsp_player/blob/master/src/FfmpegWrapper.m

    I can post more code if needed, but my main loop in FfmpegWrapper now looks like this (_audioController is a reference to the AudioStreamer instance):

    -(int) startDecodingWithCallbackBlock: (void (^) (AVFrameData *frame)) frameCallbackBlock
                         waitForConsumer: (BOOL) wait
                      completionCallback: (void (^)()) completion
    {
       OSMemoryBarrier();
       _stopDecode=false;
       dispatch_queue_t decodeQueue = dispatch_queue_create("decodeQueue", NULL);
       dispatch_async(decodeQueue, ^{
           int frameFinished;
           OSMemoryBarrier();
           while (self->_stopDecode==false){
               @autoreleasepool {
                   CFTimeInterval currentTime = CACurrentMediaTime();
                   if ((currentTime-_previousDecodedFrameTime) > MIN_FRAME_INTERVAL &&
                       av_read_frame(_formatCtx, &_packetFFmpeg)>=0) {

                       _previousDecodedFrameTime = currentTime;
                       // Is this a packet from the video stream?
                       if(_packetFFmpeg.stream_index==_videoStream) {
                           // Decode video frame
                           avcodec_decode_video2(_codecCtx, _frame, &frameFinished,
                                                 &_packetFFmpeg);

                           // Did we get a video frame?
                           if(frameFinished) {
                               // create a frame object and call the block;
                               AVFrameData *frameData = [self createFrameData:_frame trimPadding:YES];
                               frameCallbackBlock(frameData);
                           }

                           // Free the packet that was allocated by av_read_frame
                           av_free_packet(&_packetFFmpeg);

                       } else if (_packetFFmpeg.stream_index==audioStream) {

                           // NSLog(@"audio stream");
                           [audioPacketQueueLock lock];

                           audioPacketQueueSize += _packetFFmpeg.size;
                           [audioPacketQueue addObject:[NSMutableData dataWithBytes:&_packetFFmpeg length:sizeof(_packetFFmpeg)]];

                           [audioPacketQueueLock unlock];

                           if (!primed) {
                               primed=YES;
                               [_audioController _startAudio];
                           }

                           if (emptyAudioBuffer) {
                               [_audioController enqueueBuffer:emptyAudioBuffer];
                           }

                           //av_free_packet(&_packetFFmpeg);

                       } else {

                           // Free the packet that was allocated by av_read_frame
                           av_free_packet(&_packetFFmpeg);
                       }


                   } else{
                       usleep(1000);
                   }
               }
           }
           completion();
       });
       return 0;
    }

    Enqueue Buffer in AudioStreamer:

    - (OSStatus)enqueueBuffer:(AudioQueueBufferRef)buffer
    {
       OSStatus status = noErr;

       if (buffer) {
           AudioTimeStamp bufferStartTime;
           buffer->mAudioDataByteSize = 0;
           buffer->mPacketDescriptionCount = 0;

           if (_streamer.audioPacketQueue.count <= 0) {
               _streamer.emptyAudioBuffer = buffer;
               return status;
           }

           _streamer.emptyAudioBuffer = nil;

           while (_streamer.audioPacketQueue.count && buffer->mPacketDescriptionCount < buffer->mPacketDescriptionCapacity) {
               AVPacket *packet = [_streamer readPacket];

               if (buffer->mAudioDataBytesCapacity - buffer->mAudioDataByteSize >= packet->size) {
                   if (buffer->mPacketDescriptionCount == 0) {
                       bufferStartTime.mSampleTime = packet->dts * _audioCodecContext->frame_size;
                       bufferStartTime.mFlags = kAudioTimeStampSampleTimeValid;
                   }

                   memcpy((uint8_t *)buffer->mAudioData + buffer->mAudioDataByteSize, packet->data, packet->size);
                   buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mStartOffset = buffer->mAudioDataByteSize;
                   buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mDataByteSize = packet->size;
                   buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mVariableFramesInPacket = _audioCodecContext->frame_size;

                   buffer->mAudioDataByteSize += packet->size;
                   buffer->mPacketDescriptionCount++;


                   _streamer.audioPacketQueueSize -= packet->size;

                   av_free_packet(packet);
               }
               else {

                   //av_free_packet(packet);
                   break;
               }
           }

           [decodeLock_ lock];
           if (buffer->mPacketDescriptionCount > 0) {
               status = AudioQueueEnqueueBuffer(audioQueue_, buffer, 0, NULL);
               if (status != noErr) {
                   NSLog(@"Could not enqueue buffer.");
               }
           } else {
               AudioQueueStop(audioQueue_, NO);
               finished_ = YES;
           }

           [decodeLock_ unlock];
       }

       return status;
    }
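
    One detail worth noting while reading enqueueBuffer: bufferStartTime is computed but never handed to the queue, because AudioQueueEnqueueBuffer takes no timing information at all. Purely as a hedged sketch (this is not in either project): if the intent is to schedule the buffer at its timestamp, the extended enqueue call is the one that accepts a start time:

    // Sketch: schedule at bufferStartTime instead of "as soon as possible".
    // Passing 0/NULL for the packet descriptions uses the descriptions already
    // stored in the buffer, assuming it was allocated with
    // AudioQueueAllocateBufferWithPacketDescriptions.
    status = AudioQueueEnqueueBufferWithParameters(audioQueue_, buffer,
                                                   0, NULL,          // packet descriptions from the buffer
                                                   0, 0,             // no frame trimming
                                                   0, NULL,          // no parameter events
                                                   &bufferStartTime, // desired start time
                                                   NULL);            // actual start time (not needed here)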

    Read packet in FfmpegWrapper:

    - (AVPacket*)readPacket
    {
       // Keep returning the in-progress packet until it has been fully consumed.
       if (_currentPacket.size > 0 || _inBuffer) return &_currentPacket;

       // Otherwise take the oldest packet off the shared audio queue.
       NSMutableData *packetData = [audioPacketQueue objectAtIndex:0];
       _packet = [packetData mutableBytes];

       if (_packet) {
           if (_packet->dts != AV_NOPTS_VALUE) {
               _packet->dts += av_rescale_q(0, AV_TIME_BASE_Q, _audioStream->time_base);
           }

           if (_packet->pts != AV_NOPTS_VALUE) {
               _packet->pts += av_rescale_q(0, AV_TIME_BASE_Q, _audioStream->time_base);
           }

           [audioPacketQueueLock lock];
           audioPacketQueueSize -= _packet->size;
           if ([audioPacketQueue count] > 0) {
               [audioPacketQueue removeObjectAtIndex:0];
           }
           [audioPacketQueueLock unlock];

           _currentPacket = *(_packet);
       }

       return &_currentPacket;
    }
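
    As a first debugging step (my own sketch, not code from either project), it helps to log how far apart the two streams' clocks actually are. FFmpeg timestamps convert to seconds via av_q2d on the stream's time_base; the helper and the stream variable names below are illustrative:

    #include <math.h> // for NAN

    // Hypothetical helper: convert a packet's PTS to seconds.
    static double packetTimeInSeconds(const AVPacket *pkt, const AVStream *st)
    {
        if (pkt->pts == AV_NOPTS_VALUE)
            return NAN; // no usable timestamp on this packet
        return pkt->pts * av_q2d(st->time_base);
    }

    // In the read loop, remember the newest time seen on each stream
    // (videoStreamRef/audioStreamRef would be the AVStream pointers,
    // lastVideoTime/lastAudioTime two doubles kept alongside the loop):
    //   if (_packetFFmpeg.stream_index == _videoStream)
    //       lastVideoTime = packetTimeInSeconds(&_packetFFmpeg, videoStreamRef);
    //   else if (_packetFFmpeg.stream_index == audioStream)
    //       lastAudioTime = packetTimeInSeconds(&_packetFFmpeg, audioStreamRef);
    //   NSLog(@"A-V offset: %.3f s", lastAudioTime - lastVideoTime);

    If that offset sits steadily around a second, it would point at buffering on the audio path (the packet queue and AudioQueue buffers) rather than at the network.
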
  • Keep trying a command until it returns "True" and then execute another

    6 January 2023, by Tyrone Hirt

    I'm trying to make a script that checks the processor usage of a specific process every 10 seconds, and when the usage is less than 2% I want two other commands to be executed.

    The purpose is to know when the program has finished processing the requests, in order to release the execution of the other commands.

    I created this script to check the processor usage of this application:

    SET ProcessorUsage = wmic path Win32_PerfFormattedData_PerfProc_Process get Name,PercentProcessorTime | findstr /i /c:RenderQuery

    %ProcessorUsage%
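
    (Note on the mechanics: plain SET stores the literal text after the equals sign; it never runs the wmic pipeline. Command output is normally captured with for /f, as in the hedged one-liner below, which still yields the whole matching line rather than just the number, so it needs further parsing; see the full sketch at the end.)

    for /f "delims=" %%L in ('wmic path Win32_PerfFormattedData_PerfProc_Process get Name^,PercentProcessorTime ^| findstr /i /c:RenderQuery') do set "ProcessorUsage=%%L"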

    And these are the commands I want to be executed when the processor usage of the RenderQuery application is less than 2%:

    for /f "delims=" %%X in ('dir /s/b/ad Proxy') do for /f "delims=" %%Y in ('dir /s/b/a-d "%%X"') do move "%%Y" ".\03. Proxy"

    for /f "delims=" %%i in ('dir /s/b/ad Proxy') do rd "%%i"

    I tried to create a script that way:

    SET ProcessorUsage = wmic path Win32_PerfFormattedData_PerfProc_Process get Name,PercentProcessorTime | findstr /i /c:RenderQuery
    :Loop
    IF %ProcessorUsage% LSS 2 (
    (for /f "delims=" %%X in ('dir /s/b/ad Proxy') do for /f "delims=" %%Y in ('dir /s/b/a-d "%%X"') do move "%%Y" ".\03. Proxy") && (for /f "delims=" %%i in ('dir /s/b/ad Proxy') do rd "%%i")
    ) ELSE (
    sleep 10 && goto Loop
    )

    I also tried this way:

    SET ProcessorUsage = wmic path Win32_PerfFormattedData_PerfProc_Process get Name,PercentProcessorTime | findstr /i /c:RenderQuery

    :Loop
    for %ProcessorUsage% LSS 2 do (
    (for /f "delims=" %%X in ('dir /s/b/ad Proxy') do for /f "delims=" %%Y in ('dir /s/b/a-d "%%X"') do move "%%Y" ".\03. Proxy") && (for /f "delims=" %%i in ('dir /s/b/ad Proxy') do rd "%%i") || (sleep 10 && goto Loop)
    )

    With these scripts, all I managed to get is a window that just blinks and closes right away...

    What's the best way to do this?

    EDIT

    Explaining in more detail: I work with video production, so I constantly need to render proxy files, which are low-quality video files used during editing and replaced at the end of the edit; this makes video editing much smoother.

    That said, I have a folder template. Inside this template there is a folder into which I always download the video files from the camera, and that folder always contains a .bat file that opens all the video files in the software that generates proxy files from the camera's video files.

    This .bat file has this exact code:

    start "" "C:\Users\User\Downloads\FFmpeg_Batch_AV_Converter_Portable_2.8.4_x64\FFBatch.exe" -f "%~dp0\"

    When this software opens, it automatically renders the proxy files; their output always goes into a child folder of the original files' folder, named Proxy.

    The issue is that I don't want them scattered across several separate Proxy folders, so I created another .bat file in the parent folder of all the video files; this script contains exactly these lines:

    for /f "delims=" %%X in ('dir /s/b/ad Proxy') do for /f "delims=" %%Y in ('dir /s/b/a-d "%%X"') do move "%%Y" ".\03. Proxy"

    for /f "delims=" %%i in ('dir /s/b/ad Proxy') do rd "%%i"

    That is, it searches recursively for files inside folders named Proxy, then moves those files to the 03. Proxy folder in the parent folder.

    The second line looks for all the Proxy folders (which are now empty) and deletes them.

    The point is: I currently run the second script manually, as soon as the render finishes, and I would like it to run automatically.

    Given this, I thought of adding a line to the first script (the one that opens the video files in the rendering program) to call the second script in the background; the second script would check the CPU usage of this application every 10 seconds, and when the usage drops below 2% (in theory nothing else is rendering, since CPU usage is low) it would execute the lines that move the files and remove the folders.

    I think there's a good chance this will work, because this software renders 4 videos at a time, which means there is no gap between one video finishing and another starting; the CPU usage stays very high until all the videos are done, so I think this is the best signal to release the other commands.
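
    Putting the pieces together, here is a minimal sketch of one way the watcher could look (my own sketch, untested: it swaps the non-standard sleep for timeout, uses a wmic where clause so only the number comes back, and strips wmic's trailing carriage return with a nested for /f; if the process has already exited, the usage is treated as 0 so the cleanup runs):

    @echo off
    setlocal
    :Loop
    set "Usage="
    for /f %%A in ('wmic path Win32_PerfFormattedData_PerfProc_Process where "Name='RenderQuery'" get PercentProcessorTime ^| findstr /r "[0-9]"') do for /f %%U in ("%%A") do set "Usage=%%U"
    if not defined Usage set "Usage=0"
    if %Usage% GEQ 2 (
        timeout /t 10 /nobreak >nul
        goto Loop
    )
    for /f "delims=" %%X in ('dir /s/b/ad Proxy') do for /f "delims=" %%Y in ('dir /s/b/a-d "%%X"') do move "%%Y" ".\03. Proxy"
    for /f "delims=" %%i in ('dir /s/b/ad Proxy') do rd "%%i"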