Advanced search

Media (0)

Keyword: - Tags - /api

No media matching your criteria is available on this site.

Other articles (109)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

  • From upload to the final video [standalone version]

    31 January 2010, by

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are performed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)

On other sites (14986)

  • Gstreamer video increases latency with decreased FPS

    19 November 2024, by Ri Di

    I am using a Raspberry Pi 5 to stream the video:

    


    rpicam-vid -t 0 --camera 0 --nopreview --mode 2304:1296:10:P --codec yuv420 
           --width 640 --height 360 --framerate 10 --rotation 0 
           --autofocus-mode manual --inline --listen -o - | 
     ffmpeg -f rawvideo -pix_fmt yuv420p -s:v 640x360 -r 10 -i /dev/stdin 
            -c:v libx264 -preset ultrafast -tune zerolatency -maxrate 300k 
            -bufsize 50k -g 30000 -f mpegts tcp://192.168.0.147:1234
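
    A possible sender-side variant to test (only a sketch, assuming part of the delay comes from the MPEG-TS muxer's default buffering) is to add -muxdelay 0 -muxpreload 0 -flush_packets 1 to the ffmpeg half of the pipe:

     ffmpeg -f rawvideo -pix_fmt yuv420p -s:v 640x360 -r 10 -i /dev/stdin 
            -c:v libx264 -preset ultrafast -tune zerolatency -maxrate 300k 
            -bufsize 50k -g 30000 -muxdelay 0 -muxpreload 0 -flush_packets 1 
            -f mpegts tcp://192.168.0.147:1234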


    


    View it with:

    


    gst-launch-1.0 -v tcpserversrc host=0.0.0.0 port=1234 ! queue ! 
    tsdemux ! h264parse ! avdec_h264 ! videorate ! video/x-raw,framerate=10/1 !  
    videoconvert ! autovideosink sync=false
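
    A receiver-side variant to test (again only a sketch; it assumes the tsdemux element in use exposes a latency property, which defaults to about 700 ms, and keeps the queue near-empty) is:

    gst-launch-1.0 -v tcpserversrc host=0.0.0.0 port=1234 ! 
    queue max-size-buffers=1 max-size-bytes=0 max-size-time=0 ! 
    tsdemux latency=0 ! h264parse ! avdec_h264 ! videorate ! 
    video/x-raw,framerate=10/1 ! videoconvert ! autovideosink sync=false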


    


    The problem is that with 10 FPS I get around 2 s of latency, while 56 or 120 FPS results in below 300 ms latency.

    


    Is the problem on the sender side or the receiver side? Or both?

    


    I am not planning to use 10 FPS; it is only to demonstrate the problem. But I would like to get lower latency at 56 FPS, ideally as low as at 120 FPS (the difference is around 80-100 ms), or maybe even better, since latency seems to drop as FPS increases.

    


    Maybe there is some kind of buffering parameter which holds frames?

    


    (Of course, when testing with higher FPS I change both frame-rate values in the sender command and the one in the reader command. The camera is the official Raspberry Pi Camera Module 3.)

    


    I'd also like to mention that the same thing happens with ffplay:

    


    ffplay -probesize 3000 -i tcp://0.0.0.0:1234/?listen
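
    The equivalent low-latency flags for ffplay (standard ffplay/FFmpeg options, shown here only as a sketch) would be something like:

    ffplay -fflags nobuffer -flags low_delay -probesize 32 -analyzeduration 0 
           -framedrop -i tcp://0.0.0.0:1234/?listen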


    


  • Queue in Python processing more than one video at a time? [closed]

    12 November 2024, by Mateus Coelho

    I have a Raspberry Pi on which I process videos: I rotate them and add 4 watermarks. But when I ran this on the Raspberry Pi it used 100% of all 4 CPU cores and the board rebooted. I worked around that with -threads 1, restricting ffmpeg to just one of the 4 cores, and it worked.

    


    I made a queue to process one video at a time, because I have 4 buttons that trigger the videos. But when I send more than 3 videos to the queue, the Pi still reboots, even though the CPU usage I am monitoring stays at 100% on only one of the four cores.

    


    But if I send 4 or 5 videos to the queue folder, it reboots completely, and the strangest part is that after the reboot it still manages to process all the videos. (A sketch of a stricter single-worker queue is included after the script below.)

    


    
import os
import time
import subprocess
from google.cloud import storage
import shutil

QUEUE_DIR = "/home/abidu/Desktop/ApertaiRemoteClone"
ERROR_VIDEOS_DIR = "/home/abidu/Desktop/ApertaiRemoteClone/ErrorVideos"
CREDENTIALS_PATH = "/home/abidu/Desktop/keys.json"
BUCKET_NAME = "videos-283812"

def is_valid_video(file_path):
    try:
        result = subprocess.run(
            ['ffprobe', '-v', 'error', '-show_entries', 'format=duration', '-of', 'default=noprint_wrappers=1:nokey=1', file_path],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE
        )
        return result.returncode == 0
    except Exception as e:
        print(f"Erro ao verificar o vídeo: {e}")
        return False

def overlay_images_on_video(input_file, image_files, output_file, positions, image_size=(100, 100), opacity=0.7):
    inputs = ['-i', input_file]
    for image in image_files:
        if image:
            inputs += ['-i', image]
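
    # Build the filter graph: rotate the base video 90 degrees counter-clockwise
    # (transpose=2), then chain one scaled, semi-transparent overlay per
    # watermark position.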
    filter_complex = "[0:v]transpose=2[rotated];"
    current_stream = "[rotated]"
    for i, (x_offset, y_offset) in enumerate(positions):
        filter_complex += f"[{i+1}:v]scale={image_size[0]}:{image_size[1]},format=rgba,colorchannelmixer=aa={opacity}[img{i}];"
        filter_complex += f"{current_stream}[img{i}]overlay={x_offset}:{y_offset}"
        if i < len(positions) - 1:
            filter_complex += f"[tmp{i}];"
            current_stream = f"[tmp{i}]"
        else:
            filter_complex += ""
    command = ['ffmpeg', '-y', '-threads', '1'] + inputs + ['-filter_complex', filter_complex, '-threads', '1', output_file]

    try:
        result = subprocess.run(command, check=True)
        result.check_returncode()  # Verify that the command completed successfully
        print(f"Video processed successfully: {output_file}")
    except subprocess.CalledProcessError as e:
        print(f"Error processing the video: {e}")
        if "moov atom not found" in str(e):
            print("Video corrupted or missing its moov atom. Skipping the file.")
        raise  # Re-raise the exception so it is handled at the caller level

def process_and_upload_video():
    client = storage.Client.from_service_account_json(CREDENTIALS_PATH)
    bucket = client.bucket(BUCKET_NAME)
    
    while True:
        # Wait 10 seconds before checking for new videos
        time.sleep(10)

        # Check whether there are files in the queue directory
        # (skip our own "processed_" outputs so they are not picked up again)
        queue_files = [f for f in os.listdir(QUEUE_DIR)
                       if f.endswith(".mp4") and not f.startswith("processed_")]

        if queue_files:
            video_file = os.path.join(QUEUE_DIR, queue_files[0])  # Take the first video in the queue

            # Output path after processing, keeping the same name as the input file
            output_file = os.path.join(QUEUE_DIR, "processed_" + os.path.basename(video_file))
            if not is_valid_video(video_file):
                print(f"Invalid or corrupted video file: {video_file}. Skipping.")
                os.remove(video_file)  # Remove the corrupted file
                continue

            # Process the video with the overlay_images_on_video function
            try:
                overlay_images_on_video(
                    video_file,
                    ["/home/abidu/Desktop/ApertaiRemoteClone/Sponsor/image1.png", 
                     "/home/abidu/Desktop/ApertaiRemoteClone/Sponsor/image2.png", 
                     "/home/abidu/Desktop/ApertaiRemoteClone/Sponsor/image3.png", 
                     "/home/abidu/Desktop/ApertaiRemoteClone/Sponsor/image4.png"],
                    output_file,
                    [(10, 10), (35, 1630), (800, 1630), (790, 15)],
                    image_size=(250, 250),
                    opacity=0.8
                )
                
                if os.path.exists(output_file):
                    blob = bucket.blob(os.path.basename(video_file).replace("-", "/"))
                    blob.upload_from_filename(output_file, content_type='application/octet-stream')
                    print(f"Uploaded {output_file} to {BUCKET_NAME}")
                    os.remove(video_file)
                    os.remove(output_file)
                    print(f"Processed and deleted {video_file} and {output_file}.")
            
            except subprocess.CalledProcessError as e:
                print(f"Erro ao processar {video_file}: {e}")
                
                move_error_video_to_error_directory(video_file)

                continue  # Move para o próximo vídeo na fila após erro

def move_error_video_to_error_directory(video_file):
    print(f"Movendo arquivo de vídeo com erro {video_file} para {ERROR_VIDEOS_DIR}")

    if not os.path.exists(ERROR_VIDEOS_DIR):
        os.makedirs(ERROR_VIDEOS_DIR)
                
    shutil.move(video_file, ERROR_VIDEOS_DIR)

if __name__ == "__main__":
    process_and_upload_video()
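
    For comparison, here is a minimal sketch (hypothetical names, not part of the script above) of serializing the jobs inside one process with queue.Queue and a single worker thread, so that however many buttons fire, only one ffmpeg run with -threads 1 is active at a time:

import os
import queue
import subprocess
import threading

# Hypothetical in-process job queue: button handlers enqueue file paths,
# and a single worker thread processes them one at a time.
video_jobs: "queue.Queue[str]" = queue.Queue()

def worker():
    while True:
        video_file = video_jobs.get()  # blocks until a job is available
        output_file = os.path.join(os.path.dirname(video_file),
                                   "processed_" + os.path.basename(video_file))
        try:
            # Single-threaded ffmpeg keeps CPU usage to roughly one core.
            subprocess.run(
                ["ffmpeg", "-y", "-threads", "1", "-i", video_file,
                 "-vf", "transpose=2", "-threads", "1", output_file],
                check=True,
            )
        except subprocess.CalledProcessError as e:
            print(f"ffmpeg failed for {video_file}: {e}")
        finally:
            video_jobs.task_done()  # mark this job done, then wait for the next

# One daemon worker means at most one ffmpeg process runs at any moment.
threading.Thread(target=worker, daemon=True).start()

# A button callback would only enqueue the recorded file, e.g.:
# video_jobs.put("/home/abidu/Desktop/ApertaiRemoteClone/video1.mp4")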