
Media (91)

Other articles (26)

  • Customising categories

    21 June 2013

    Form for creating a category
    For those who know SPIP well, a category can be thought of as a section ("rubrique").
    For a document of type category, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a media-type document, the fields not displayed by default are: Descriptif rapide
    It is also in this configuration section that you can specify the (...)

  • The plugin: Podcasts.

    14 July 2010

    The problem of podcasting is once again a problem that reveals the state of standardisation of data transport on the Internet.
    Two interesting formats exist: the one developed by Apple, strongly geared towards iTunes, whose SPEC is here; the "Media RSS Module" format, which is more "free" and is notably backed by Yahoo and the Miro software.
    File types supported in the feeds
    Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)

  • Selection of projects using MediaSPIP

    2 May 2011

    The examples below are representative of how specific projects make specific uses of MediaSPIP.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen associations of this kind. Its members (...)

On other sites (4424)

  • How to escape an apostrophe with os.system in ffmpeg drawtext in Python

    28 September 2023, by Ishu singh

    I want to execute this ffmpeg drawtext command with os.system('command'), but it fails just because of the ' (apostrophe).

    


    The code goes here:

    


    The \f works like \n, but I'm using it for separating words.

    


from PIL import ImageFont
import os

def create_lines(longline, start, end, fontsize=75, fontfile='OpenSansCondensedBold.ttf'):
    # fit_text is a helper defined elsewhere in my project
    fit = fit_text(longline, 700, fontfile)

    texts = []
    now = 0
    # breaking the text into lines on '\f'
    for wordIndex in range(len(fit)):
        if fit[wordIndex] == '\f' or wordIndex == len(fit) - 1:
            texts.append(fit[now:wordIndex + 1].strip('\f'))
            now = wordIndex

    # adding one drawtext filter per line of text
    string = ''
    count = 0
    for line in texts:
        string += f''',drawtext=fontfile={fontfile}:fontsize={fontsize}:text='{line}':fontcolor=black:bordercolor=white:borderw=4:x=(w-text_w)/2:y=(h-text_h)/2-100+{count}:enable='between(t,{start},{end})' '''
        count += 100

    print(string)
    return string

def createVideo(content):
    input_video = 'video.mp4'
    output_video = 'output.mp4'
    font_file = 'BebasKai.ttf'
    text_file = 'OpenSansCondensedBold.ttf'
    font_size = 75
    font_color = 'white'

    part1 = create_lines(content[1], 0.5, 7)
    part2 = create_lines(content[2], 7.5, 10)

    os.system(
        f"""ffmpeg -i {input_video} -vf "drawtext=fontfile={font_file}:fontsize={95}:text={content[0]}:fontcolor={font_color}:box=1:boxcolor=black@0.9:boxborderw=20:x=(w-text_w)/2:y=(h-text_h)/4-100{str(part1)}{str(part2)}" -c:v libx264 -c:a aac -t 10 {output_video} -y""")

my_text = ['The Brain', "Your brain can't multitask effectively", "Multitasking is a myth, it's just rapid switching between tasks"]

createVideo(my_text)


    



    


    What I want is to be able to execute this command correctly.
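    A sketch of one way around this, with helper names (drawtext_escape, build_drawtext) that are mine, not from the question: pass the command to subprocess.run as an argument list so the shell's quoting layer disappears, then escape the characters ffmpeg's filter parser treats as special in the text value. ffmpeg's quoting has several layers, so for fully arbitrary text the drawtext textfile= option is the more robust route.

```python
import subprocess

def drawtext_escape(text: str) -> str:
    # Escape the filter-level specials: backslash first, then colon
    # (option separator) and apostrophe (quoting character).
    for ch in ("\\", ":", "'"):
        text = text.replace(ch, "\\" + ch)
    return text

def build_drawtext(line: str, fontfile: str = "OpenSansCondensedBold.ttf",
                   fontsize: int = 75) -> str:
    # Leave text=... unquoted; the escapes above protect the specials.
    return (f"drawtext=fontfile={fontfile}:fontsize={fontsize}"
            f":text={drawtext_escape(line)}:fontcolor=black")

# Example: build the -vf argument and run ffmpeg without a shell.
# filters = build_drawtext("it's just rapid switching")
# subprocess.run(["ffmpeg", "-i", "video.mp4", "-vf", filters,
#                 "-c:a", "copy", "output.mp4", "-y"], check=True)
```

    With an argument list there is no shell to re-interpret the apostrophe, so only the filter-level escaping remains.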

    


  • Decoding a live stream using ffmpeg in Python [closed]

    18 November 2023, by Anurag Mishra

    I am trying to decode an H.264 video stream using ffmpeg in Python. My code decodes and displays the first frame (the key-frame), but for subsequent frames it raises:

    Error opening input: Invalid data found when processing input
    Error opening input file pipe:0.
    Error opening input files: Invalid data found when processing input

    


    Here is my code for receiving the encoded data and decoding it:

    


    import socket
import struct
import cv2
import numpy as np
import subprocess


def display_h264_from_bytes(h264_data,sps,pps):
    ffmpeg_cmd = [
        'ffmpeg',
        '-probesize', '1000000',  # Increase probesize for more accurate analysis
        '-analyzeduration', '10000',
        '-c:v', 'h264',
        '-i', 'pipe:0',
        '-c:v', 'mjpeg',
        '-f', 'image2pipe',
        'pipe:1'
    ]


    process = subprocess.Popen(ffmpeg_cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    try:
        print("sps is : ",sps)
        print("pps is : ",pps)
        # Provide SPS and PPS along with the H.264 data to the FFmpeg subprocess
        input_data = sps + pps + h264_data
        output, _ = process.communicate(input=input_data)
        nparr = np.frombuffer(output, dtype=np.uint8)
        frame = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
        cv2.imshow('H.264 Video', frame)
        print("frame has been displayed :",frame)
        cv2.waitKey(1)
        print("after wait key")
        #cv2.destroyAllWindows()
        print("destroyed windows")
    except Exception as e:
        print("exception is ", e)


def extract_headers(h264_data,sequence_number):
    print("sequence_number  :: ",sequence_number)
    headers = {'SPS': b'', 'PPS': b''}
    start = 0
    end = 0  # Initialize end variable

    while start < len(h264_data):
        start_code = h264_data.find(b'\x00\x00\x00\x01', start)
        if start_code == -1:
            break

        end = h264_data.find(b'\x00\x00\x00\x01', start_code + 1)
        if end == -1:
            end = len(h264_data)

        nal_type = h264_data[start_code + 4] & 0x1F
        if nal_type == 7:  # SPS NAL unit
            print("SPS NAL unit")
            headers['SPS'] = h264_data[start_code:end]
        elif nal_type == 8:  # PPS NAL unit
            print("PPS NAL unit")
            headers['PPS'] = h264_data[start_code:end]
        elif nal_type == 5:
            print("IDR Frame (I-frame)")
        elif nal_type == 1:
            print("Non-IDR Frame (P-frame)")
        elif nal_type == 6:
            print("SEI NAL Unit")
        elif nal_type == 9:
            print("AUD NAL Unit")


        start = end

    return headers

def receive_image_data():
    # Configuration
    server_host = '0.0.0.0'  # Listen on all available network interfaces
    server_port = 33333  # Replace with the desired port number
    # Specify the output directory for JPEG images

    # Socket configuration
    server_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server_socket.bind((server_host, server_port))

    # Timeout for incomplete sequences (in seconds)
    sequence_timeout = 30
    SPS = b""  # bytes, so they can be concatenated with the H.264 data
    PPS = b""
    # Variables to keep track of received chunks
    sequence_number = -1
    received_chunks = {}


    try:
#        print("udp server starting")
        while True:
            data, client_address = server_socket.recvfrom(65536)  # Adjust the buffer size as needed
            header_size = 12  # Size of the header (3 integers, each 4 bytes)

            # Unpack the header
            sequence_number, chunk_number, total_chunks = struct.unpack('!III', data[:header_size])

            # Extract the data part (chunk) from the received packet
            chunk_data = data[header_size:]

            if sequence_number not in received_chunks:
                received_chunks[sequence_number] = {}

            received_chunks[sequence_number][chunk_number] = chunk_data

            # Check if we have received all chunks for the current sequence
            if len(received_chunks[sequence_number]) == total_chunks:
                # Reconstruct the complete data from received chunks
                complete_data = b''.join(received_chunks[sequence_number][i] for i in range(total_chunks))
                print("complete data is : ",complete_data)
                if sequence_number == 1:
                    try:
                        extracted_headers = extract_headers(complete_data,sequence_number)
                        SPS = extracted_headers['SPS']
                        PPS = extracted_headers['PPS']
                    except Exception as e:
                        print("exception is : ",e)
                else:
                    display_h264_from_bytes(complete_data,SPS,PPS)

    except Exception as e:
        print("exception is as eee : ",e)

    finally:
        # Release resources
        server_socket.close()
        cv2.destroyAllWindows()

if __name__ == '__main__':
    receive_image_data()
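    A likely explanation for the symptom above, offered as a guess rather than a verified diagnosis: process.communicate() closes stdin and waits for ffmpeg to exit, so every call to display_h264_from_bytes starts a fresh ffmpeg that sees one isolated chunk. Unless that chunk begins with SPS/PPS and a key-frame, the demuxer reports "Invalid data found when processing input". A sketch of the alternative, one long-lived decoder per stream (frame-reading loop elided):

```python
import subprocess

def decoder_cmd():
    # Long-lived decoder: raw Annex-B H.264 in on stdin,
    # fixed-size BGR frames out on stdout.
    return [
        "ffmpeg", "-loglevel", "quiet",
        "-f", "h264", "-i", "pipe:0",
        "-f", "rawvideo", "-pix_fmt", "bgr24",
        "pipe:1",
    ]

def start_decoder():
    # One process for the whole stream: the decoder keeps its reference
    # frames, so P-frames after the first key-frame decode correctly.
    return subprocess.Popen(decoder_cmd(),
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE)
```

    Feed each reconstructed chunk to proc.stdin and read width * height * 3 bytes per decoded frame from proc.stdout, ideally on a separate thread so that neither pipe fills up and deadlocks.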


    


  • memory leak reading video frames to numpy array using ffmpeg as a python subprocess

    19 November 2023, by paddyg

    I can stream videos frame by frame to an OpenGL Texture2D in Python (pi3d module; example in pi3d_demos/VideoWalk.py), but I've noticed that it gradually leaks memory. Below is a stripped-down version of the code that shows the problem.

    


    Can anyone see where I'm leaking? The memory is recovered when Python stops. I've tried explicitly setting things to None and calling the garbage collector manually.

    


    #!/usr/bin/python
import os
import numpy as np
import subprocess
import threading
import time
import json

def get_dimensions(video_path):
    probe_cmd = f'ffprobe -v error -show_entries stream=width,height,avg_frame_rate -of json "{video_path}"'
    probe_result = subprocess.check_output(probe_cmd, shell=True, text=True)
    video_info_list = [vinfo for vinfo in json.loads(probe_result)['streams'] if 'width' in vinfo]
    if len(video_info_list) > 0:
        video_info = video_info_list[0] # use first if more than one!
        return(video_info['width'], video_info['height'])
    else:
        return None

class VideoStreamer:
    def __init__(self, video_path):
        self.flag = False # use to signal new texture
        self.kill_thread = False
        self.command = [ 'ffmpeg', '-i', video_path, '-f', 'image2pipe',
                        '-pix_fmt', 'rgb24', '-vcodec', 'rawvideo', '-']
        dimensions = get_dimensions(video_path)
        if dimensions is not None:
            (self.W, self.H) = dimensions
            self.P = 3
            self.image = np.zeros((self.H, self.W, self.P), dtype='uint8')
            self.t = threading.Thread(target=self.pipe_thread)
            self.t.start()
        else: # couldn't get dimensions for some reason - assume not able to read video
            self.W = 240
            self.H = 180
            self.P = 3
            self.image = np.zeros((self.H, self.W, self.P), dtype='uint8')
            self.t = None

    def pipe_thread(self):
        pipe = None
        while not self.kill_thread:
            st_tm = time.time()
            if pipe is None:
                pipe = subprocess.Popen(self.command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, bufsize=-1)
            self.image = np.frombuffer(pipe.stdout.read(self.H * self.W * self.P), dtype='uint8') # overwrite array
            pipe.stdout.flush() # presumably nothing else has arrived since read()
            pipe.stderr.flush() # ffmpeg sends commentary to stderr
            if len(self.image) < self.H * self.W * self.P: # end of video, reload
                pipe.terminate()
                pipe = None
            else:
                self.image.shape = (self.H, self.W, self.P)
                self.flag = True
            step = time.time() - st_tm
            time.sleep(max(0.04 - step, 0.0)) # adding fps info to ffmpeg doesn't seem to have any effect
        if pipe is not None:
            pipe.terminate()
            pipe = None

    def kill(self):
        self.kill_thread = True
        if self.t is not None:
            self.t.join()

vs = None
try:
    while True:
        for (path, _, videos) in os.walk("/home/patrick/Pictures/videos"):
            for video in videos:
                print(video)
                os.system("free") # shows gradually declining memory available
                vs = VideoStreamer(os.path.join(path, video))
                for i in range(500):
                    tries = 0
                    while not vs.flag and tries < 5:
                        time.sleep(0.001)
                        tries += 1
                    # at this point vs.image is a numpy array HxWxP bytes
                    vs.flag = False
                vs.kill()
except KeyboardInterrupt:
    if vs is not None:
        vs.kill()


os.system("free")
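    Not a definitive diagnosis, but two common culprits in this pattern: stderr=subprocess.PIPE is never actually drained (flush() is a no-op on the read end), and terminated children are never wait()ed, so each video can leave pipe buffers and a zombie process behind. A cleanup sketch (stop_pipe is my own name, not from the question) that closes both descriptors and reaps the child:

```python
import subprocess

def stop_pipe(pipe):
    # Close both pipe fds and reap the child; otherwise buffered
    # stderr output and the process table entry stick around.
    pipe.terminate()
    pipe.stdout.close()
    pipe.stderr.close()
    pipe.wait()
```

    Calling something like this wherever pipe.terminate() appears above, and routing stderr to subprocess.DEVNULL if the commentary isn't needed, should make each video's resources reclaimable.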