Advanced search

Media (1)

Word: - Tags -/bug

Other articles (65)

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether of type: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and more (OpenOffice, Microsoft Office (spreadsheets, presentations), web (html, css), LaTeX, Google Earth) (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flash-based Flowplayer is used instead.
    The HTML5 player was built specifically for MediaSPIP: its appearance can be fully customized to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)
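
    As a minimal sketch of the pattern described above (not MediaSPIP's actual markup; file names and the player path are placeholders):

    <video controls>
      <source src="clip.mp4" type="video/mp4">
      <source src="clip.ogv" type="video/ogg">
      <!-- older browsers fall back to the Flash-based Flowplayer -->
      <object type="application/x-shockwave-flash" data="flowplayer.swf">
        <param name="movie" value="flowplayer.swf">
        <param name="flashvars" value='config={"clip":"clip.mp4"}'>
      </object>
    </video>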

  • Customizing categories

    21 June 2013

    Category creation form
    For those who know SPIP well, a category can be thought of as a SPIP section (rubrique).
    For a document of type category, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration of form masks.
    For a document of type media, the fields not displayed by default are: Short description
    It is also in this configuration area that you can specify the (...)

On other sites (9323)

  • `ffmpeg -f concat` doesn't work when all input streams appear to have the same spec

    2 October 2024, by Roy

    My ffmpeg command:

    ffmpeg -safe 0 -f concat -i list.txt -c copy out.mp4

    My 1st input file:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'D:\Applications\ffmpeg_6.0_full\a.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf60.3.100
  Duration: 00:00:04.97, start: 0.000000, bitrate: 40 kb/s
  Stream #0:0[0x1](und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 2 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
      vendor_id       : [0][0][0][0]
  Stream #0:1[0x2](und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 27 kb/s, 30 fps, 30 tbr, 30k tbn (default)
    Metadata:
      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]
      encoder         : Lavc60.3.100 libx264

    My 2nd input file:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'D:\Applications\ffmpeg_6.0_full\b.mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: mp41isom
    creation_time   : 2023-03-08T06:47:13.000000Z
    artist          : Microsoft Game DVR
    title           : PUBG: BATTLEGROUNDS
  Duration: 00:10:00.16, start: 0.000000, bitrate: 20885 kb/s
  Stream #0:0[0x1](und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 20739 kb/s, 30 fps, 30 tbr, 30k tbn (default)
    Metadata:
      creation_time   : 2023-03-08T06:47:13.000000Z
      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]
      encoder         : AVC Coding
  Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 131 kb/s (default)
    Metadata:
      creation_time   : 2023-03-08T06:47:13.000000Z
      handler_name    : SoundHandler
      vendor_id       : [0][0][0][0]

    The above command prints a number of warnings:

    [mov,mp4,m4a,3gp,3g2,mj2 @ 0000025239902d40] Auto-inserting h264_mp4toannexb bitstream filter
[mp4 @ 00000252396fe5c0] Non-monotonous DTS in output stream 0:1; previous: 218112, current: 150024; changing to 218113. This may result in incorrect timestamps in the output file.
...
a lot of them
...
frame=25992 fps=21754 q=-1.0 Lsize= 1519621kB time=00:14:49.39 bitrate=13996.8kbits/s speed= 744x
video:9649kB audio:1519216kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown

    The resulting video plays the first part correctly, but then players either skip straight to the end of the file (MPC-HC) or render nothing at all while the timer keeps advancing (VLC).

    My understanding of the concat demuxer is that it requires all videos to have the same spec, which I think my inputs satisfy (all the "Stream #0:0" etc. lines match). I only see the following differences, which I assumed would be okay:

    1. The metadata differ, both for the whole input (e.g. "major_brand") and per stream (e.g. "encoder"). I assumed metadata would not affect the processing.
    2. The order of the video/audio streams differs between the two inputs: the 1st input file has audio then video; the 2nd has video then audio. I assumed ffmpeg knows the difference and won't concat a video stream onto an audio stream (but see the sketch below).

    The full output of the command can be found in this pastebin: https://pastebin.com/Z5q97Uyg
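
    For reference: the concat demuxer matches streams between files by index rather than by type, so the swapped stream order in point 2 is a likely cause of the non-monotonous DTS warnings and the broken playback. A minimal sketch of a workaround under that assumption (remuxing the second file so its stream order matches the first; file names as in the question):

    # Put the audio stream first, then the video, matching a.mp4's layout
    ffmpeg -i b.mp4 -map 0:a -map 0:v -c copy b_reordered.mp4
    # Then list b_reordered.mp4 instead of b.mp4 in list.txt and re-run the concat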

    


  • Queue in Python processing more than one video at a time? [closed]

    12 November 2024, by Mateus Coelho

    I have a Raspberry Pi on which I process videos: I rotate them and overlay 4 watermarks. When I ran this on the Raspberry Pi it used 100% of all 4 CPU threads and the board rebooted. I worked around that with -threads 1, restricting ffmpeg to just one of the 4 CPU cores, and it worked.

    I made a queue to process one video at a time, because I have 4 buttons that trigger the videos. But when I send more than 3 videos to the queue, the Pi still reboots, even though the CPU usage I am monitoring stays at 100% on only one of the four CPUs.

    But if I send 4 or 5 videos to the queue folder, it reboots completely, and the strangest part is that after the reboot it still manages to process all the videos.

import os
import time
import subprocess
from google.cloud import storage
import shutil

QUEUE_DIR = "/home/abidu/Desktop/ApertaiRemoteClone"
ERROR_VIDEOS_DIR = "/home/abidu/Desktop/ApertaiRemoteClone/ErrorVideos"
CREDENTIALS_PATH = "/home/abidu/Desktop/keys.json"
BUCKET_NAME = "videos-283812"

def is_valid_video(file_path):
    try:
        result = subprocess.run(
            ['ffprobe', '-v', 'error', '-show_entries', 'format=duration', '-of', 'default=noprint_wrappers=1:nokey=1', file_path],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE
        )
        return result.returncode == 0
    except Exception as e:
        print(f"Error while checking the video: {e}")
        return False

def overlay_images_on_video(input_file, image_files, output_file, positions, image_size=(100, 100), opacity=0.7):
    inputs = ['-i', input_file]
    for image in image_files:
        if image:
            inputs += ['-i', image]
    filter_complex = "[0:v]transpose=2[rotated];"
    current_stream = "[rotated]"
    for i, (x_offset, y_offset) in enumerate(positions):
        filter_complex += f"[{i+1}:v]scale={image_size[0]}:{image_size[1]},format=rgba,colorchannelmixer=aa={opacity}[img{i}];"
        filter_complex += f"{current_stream}[img{i}]overlay={x_offset}:{y_offset}"
        if i < len(positions) - 1:
            filter_complex += f"[tmp{i}];"
            current_stream = f"[tmp{i}]"
        else:
            filter_complex += ""
    command = ['ffmpeg', '-y', '-threads', '1'] + inputs + ['-filter_complex', filter_complex, '-threads', '1', output_file]

    try:
        result = subprocess.run(command, check=True)
        result.check_returncode()  # Double-check that the command succeeded
        print(f"Video processed successfully: {output_file}")
    except subprocess.CalledProcessError as e:
        print(f"Error while processing the video: {e}")
        if "moov atom not found" in str(e):
            print("Corrupted video or missing moov atom. Skipping the file.")
        raise  # Re-raise so the caller can handle it

def process_and_upload_video():
    client = storage.Client.from_service_account_json(CREDENTIALS_PATH)
    bucket = client.bucket(BUCKET_NAME)
    
    while True:
        # Wait 10 seconds before checking for new videos
        time.sleep(10)

        # Check whether there are files in the queue directory
        queue_files = [f for f in os.listdir(QUEUE_DIR) if f.endswith(".mp4")]

        if queue_files:
            video_file = os.path.join(QUEUE_DIR, queue_files[0])  # Take the first video in the queue

            # Output path after processing, same name as the input file
            output_file = os.path.join(QUEUE_DIR, "processed_" + os.path.basename(video_file))
            if not is_valid_video(video_file):
                print(f"Invalid or corrupted video file: {video_file}. Skipping.")
                os.remove(video_file)  # Remove the corrupted file
                continue

            # Process the video with overlay_images_on_video
            try:
                overlay_images_on_video(
                    video_file,
                    ["/home/abidu/Desktop/ApertaiRemoteClone/Sponsor/image1.png", 
                     "/home/abidu/Desktop/ApertaiRemoteClone/Sponsor/image2.png", 
                     "/home/abidu/Desktop/ApertaiRemoteClone/Sponsor/image3.png", 
                     "/home/abidu/Desktop/ApertaiRemoteClone/Sponsor/image4.png"],
                    output_file,
                    [(10, 10), (35, 1630), (800, 1630), (790, 15)],
                    image_size=(250, 250),
                    opacity=0.8
                )
                
                if os.path.exists(output_file):
                    blob = bucket.blob(os.path.basename(video_file).replace("-", "/"))
                    blob.upload_from_filename(output_file, content_type='application/octet-stream')
                    print(f"Uploaded {output_file} to {BUCKET_NAME}")
                    os.remove(video_file)
                    os.remove(output_file)
                    print(f"Processed and deleted {video_file} and {output_file}.")
            
            except subprocess.CalledProcessError as e:
                print(f"Error while processing {video_file}: {e}")

                move_error_video_to_error_directory(video_file)

                continue  # Move on to the next video in the queue after an error

def move_error_video_to_error_directory(video_file):
    print(f"Moving failed video file {video_file} to {ERROR_VIDEOS_DIR}")

    if not os.path.exists(ERROR_VIDEOS_DIR):
        os.makedirs(ERROR_VIDEOS_DIR)
                
    shutil.move(video_file, ERROR_VIDEOS_DIR)

if __name__ == "__main__":
    process_and_upload_video()
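
    A side note on the command built in overlay_images_on_video above: ffmpeg's -threads option only caps encoder/decoder threads, while the filter graph (the overlay chain) uses its own thread pool, which may be why one core is still pinned at 100%. A minimal, self-contained sketch of capping both (hypothetical file names; -filter_threads and -filter_complex_threads are global ffmpeg options available in recent releases):

import subprocess

# Cap codec threads AND filter-graph threads at one each.
subprocess.run([
    'ffmpeg', '-y',
    '-threads', '1',                 # decoder threads
    '-filter_threads', '1',          # threads for simple filter graphs
    '-filter_complex_threads', '1',  # threads for -filter_complex graphs
    '-i', 'input.mp4', '-i', 'logo.png',
    '-filter_complex', '[0:v][1:v]overlay=10:10',
    '-threads', '1',                 # encoder threads
    'output.mp4',
], check=True)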



    


  • Android JavaCV FFmpeg webstream to local static website

    26 March 2017, by Thomas Devoogdt

    For my integration test I'm working on an application that needs to provide a live stream to a locally hosted website. I've already built a working site that runs on nanohttpd. The application also performs special image processing, for which I use JavaCV. The library works perfectly and all the C++ bindings work too.

    My question: how do I set up a live stream that can be played directly in a static site hosted by nanohttpd? Am I on the right track?

    My code:

    init:

    private void initLiveStream() throws FrameRecorder.Exception {
       /* ~~~ https://github.com/bytedeco/javacv/issues/598 ~~~ */
       frameRecorder = new FFmpegFrameRecorder("http://localhost:9090", imageWidth, imageHeight, 0);
       frameRecorder.setVideoOption("preset", "ultrafast");
       frameRecorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
       frameRecorder.setAudioCodec(0);
       frameRecorder.setPixelFormat(avutil.AV_PIX_FMT_YUV420P);
       frameRecorder.setFormat("webm");
       frameRecorder.setGopSize(10);
       frameRecorder.setFrameRate(frameRate);
       frameRecorder.setVideoBitrate(5000);
       frameRecorder.setOption("content_type","video/webm");
       frameRecorder.setOption("listen", "1");
       frameRecorder.start();
    }

    In my CameraView:

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
       Camera.Size size = camera.getParameters().getPreviewSize();
       Frame frame = new AndroidFrameConverter().convert(data, size.width, size.height);
       try {
            if(frameRecorder!=null){
                frameRecorder.record(frame);
            }
        } catch (FrameRecorder.Exception e) {
            e.printStackTrace();
        }
    }

    Here is one of the stack traces that appeared frequently while I searched for a solution:

    org.bytedeco.javacv.FrameRecorder$Exception: avio_open error() error -111: Could not open 'http://localhost:9090'

    I couldn’t find any other thread addressing this specific issue.

    Thanks in advance

    EDIT

    Thanks to Chester Cobus, here is the code I ended up using:

    Websocket:

    //Constructor
    AsyncHttpServer serverStream = new AsyncHttpServer();
    List<WebSocket> sockets = new ArrayList<>();

    //http://stackoverflow.com/a/33021907/5500092
    //I'm planning to use more sockets. This is the only uniform expression I found.
    serverStream.websocket("/((?:[^/]*/)*)(.*)", new AsyncHttpServer.WebSocketRequestCallback() {
        @Override
        public void onConnected(final WebSocket webSocket, AsyncHttpServerRequest request) {
            String uri = request.getPath();
            if (uri.equals("/live")) {
                sockets.add(webSocket);

                //Use this to clean up any references to your websocket
                webSocket.setClosedCallback(new CompletedCallback() {
                    @Override
                    public void onCompleted(Exception ex) {
                        try {
                            if (ex != null)
                                Log.e("WebSocket", "Error");
                        } finally {
                            sockets.remove(webSocket);
                        }
                    }
                });
            }
        }
    });

    //Updater (Observer pattern)
    @Override
    public void updated(byte[] data) {
       for (WebSocket socket : sockets) {
            socket.write(new ByteBufferList(data));
       }
    }

    Record Activity:

    private long start_time = System.currentTimeMillis();

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
       long now_time = System.currentTimeMillis();
       if ((now_time - start_time) > 250) {
           start_time = now_time;
           //https://forums.xamarin.com/discussion/40991/onpreviewframe-issue-converting-preview-byte-to-android-graphics-bitmap
           Camera.Size size = camera.getParameters().getPreviewSize();
           YuvImage image = new YuvImage(data, ImageFormat.NV21, size.width, size.height, null);
           ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
           image.compressToJpeg(new Rect(0, 0, size.width, size.height), 60, byteArrayOutputStream);
           MainActivity.getWebStreamer().updated(byteArrayOutputStream.toByteArray());
       }
    }

    JavaScript

    var socket;
    var imageElement;

    /**
    * path - String.Format("ws://{0}:8090/live", Window.Location.HostName)
    * image - HTMLImageElement
    */
    function imageStreamer(path, image) {
       imageElement = image;
       socket = new WebSocket(path);

       socket.onmessage = function(msg) {
           var arrayBuffer = msg.data;  // actually a Blob (the WebSocket's default binaryType)
           var reader = new FileReader();
           reader.onload = function(e) {
               imageElement.src = e.target.result;
           };
           reader.readAsDataURL(arrayBuffer);
       };
    }
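
    For completeness, a minimal usage sketch of the function above (the element id "live" and port 8090 are assumptions, following the ws://{host}:8090/live pattern in the comment):

    // Wire the stream to an <img> element once the page has loaded.
    window.onload = function() {
        var img = document.getElementById("live");  // assumed element id
        imageStreamer("ws://" + window.location.hostname + ":8090/live", img);
    };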