Advanced search

Media (0)

No media matching your criteria is available on this site.

Other articles (96)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in Ogv and WebM (supported by HTML5) as well as MP4 (supported by both HTML5 and Flash).
    Audio files are encoded in Ogg (supported by HTML5) as well as MP3 (supported by both HTML5 and Flash).
    Where possible, text is analyzed to extract the data needed for indexing by search engines, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
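
    As a rough sketch only (not MediaSPIP's internal pipeline, and with hypothetical source file names), the conversions described above correspond to plain ffmpeg invocations such as:

        # hypothetical input files; one output per web format listed above
        ffmpeg -i source.mov -c:v libx264 -c:a aac video.mp4           # MP4 (H.264/AAC)
        ffmpeg -i source.mov -c:v libtheora -c:a libvorbis video.ogv   # Ogv (Theora/Vorbis)
        ffmpeg -i source.mov -c:v libvpx -c:a libvorbis video.webm     # WebM (VP8/Vorbis)
        ffmpeg -i source.wav -c:a libmp3lame audio.mp3                 # MP3
        ffmpeg -i source.wav -c:a libvorbis audio.ogg                  # Ogg Vorbis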

  • Contributing to the documentation

    10 April 2011

    Documentation is one of the most important and most demanding tasks in the development of a technical tool.
    Any outside contribution on this subject is essential: reviewing what already exists; helping to write articles aimed at users (MediaSPIP administrators or simply content producers) or at developers; creating explanatory screencasts; translating the documentation into a new language;
    To do so, you can register on (...)

  • What exactly does this script do?

    18 January 2011

    This script is written in bash, so it can easily be used on any server.
    It is only compatible with a specific list of distributions (see the list of compatible distributions).
    Installing MediaSPIP's dependencies
    Its main role is to install all the software dependencies required on the server side, namely:
    the basic tools needed to install the remaining dependencies; the development tools: build-essential (via APT from the official repositories); (...)
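
    For reference, a hedged sketch of the kind of APT call such a script issues for the development tools mentioned above (the real package list is longer and distribution-specific):

        sudo apt-get update && sudo apt-get install build-essential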

On other sites (8251)

  • Why can't I set "aspect ratio" by ffmpeg -aspect ? [on hold]

    7 August 2014, by yasi

    I want to generate an h264 (AVC) file from jpg files. I generated it with the following commands:

    • ffmpeg -f image2 -i img%d.jpg -vcodec libx264 sample.h264
    • mv avatar.h264 sample-320.avc

    img%d.jpg matches 0.jpg, 1.jpg, 2.jpg ... 24.jpg, all of which are identical, with a resolution of 320x180.

    When sample.avc is generated, I checked its information with ESEyE.

    The reported resolution is correct, but the aspect ratio is odd: the original jpg sequence is 320x180, which is 16:9. So I tried to force 16:9 with the command below:

    • ffmpeg -r 25 -f image2 -i img%d.jpg -vcodec libx264 -profile:v baseline -level:v 1.1 -aspect "16:9" sample.h264

    However, it doesn't work; the aspect ratio of the generated avc is still 5:3.

    How can I keep an aspect ratio of 16:9?

    P.S.
    I guess the minimum H.264 block is 16x16, so the AVC height could be padded to 192 (16 * 12), giving an aspect ratio of 320:192 = 5:3.
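
    A hedged workaround sketch, not taken from the original thread: a raw .h264 elementary stream has no container to carry a display aspect ratio, so one option is to write the ratio into the bitstream itself with the setdar filter, which should make libx264 store the matching sample aspect ratio in the SPS:

        ffmpeg -r 25 -f image2 -i img%d.jpg -vf "setdar=16/9" -vcodec libx264 -profile:v baseline -level:v 1.1 sample.h264

    Checking the result with ffprobe (the SAR/DAR fields in the stream line) tells whether the ratio actually made it into the bitstream, independently of what the analyzer derives from the macroblock-padded size.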

  • phantomjs screenshots and ffmpeg frame missing

    20 October 2015, by Bussiness Way

    I have a problem making a video from website screenshots taken with phantomjs.

    phantomjs does not take a screenshot for every frame within a given second, and some seconds are not covered at all; a large number of frames are missing.

    The result is a video that plays too fast, with many jumps in the animation.

    test.js:

    var page = require('webpage').create(),
       address = 'http://raphaeljs.com/polar-clock.html',
       duration = 5, // duration of the video, in seconds
       framerate = 24, // number of frames per second. 24 is a good value.
       counter = 0,
       width = 1024,
       height = 786;
           frame = 10001;

    page.viewportSize = { width: width, height: height };

    page.open(address, function(status) {
       if (status !== 'success') {
           console.log('Unable to load the address!');
           phantom.exit(1);
       } else {
           window.setTimeout(function () {
               page.clipRect = { top: 0, left: 0, width: width, height: height };

               window.setInterval(function () {
                   counter++;
                   page.render('newtest/image'+(frame++)+'.png', { format: 'png' });
                   if (counter > duration * framerate) {
                       phantom.exit();
                   }
               }, 1/framerate);
           }, 200);
       }
    });

    This creates 120 images, which is the correct count, but looking at the images one by one you will see many duplicates of the same content and many missing frames.

    ffmpeg:

    ffmpeg -start_number 10001 -i newtest/image%05d.png -c:v libx264 -r 24 -pix_fmt yuv420p out.mp4

    I know this script and the ffmpeg command are not perfect; I have made hundreds of changes without luck and have lost track of the correct settings.

    Can anyone guide me to fix this?

    thank you all
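
    Two hedged observations, not a verified fix: setInterval takes its period in milliseconds, so 1/framerate is effectively 0 and the 120 screenshots are captured almost back-to-back instead of being spread over the intended 5 seconds (1000/framerate would space them out); and for an image sequence the input rate is given to ffmpeg with -framerate before -i, while -r placed after the input only sets the output rate. A sketch of the assembly command under those assumptions:

        ffmpeg -framerate 24 -start_number 10001 -i newtest/image%05d.png -c:v libx264 -pix_fmt yuv420p out.mp4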

  • FFmpeg streaming UDP

    2 October 2020, by xKedar

    I'm trying to stream, using FFmpeg, my webcam and audio to a PC in another LAN that connects to mine.

    I basically wait for an incoming connection in order to acquire the IP and port of the other side:

    import socket

    localPort   = 1234
    bufferSize  = 1024

    UDPServerSocket = socket.socket(family=socket.AF_INET, type=socket.SOCK_DGRAM)
    UDPServerSocket.bind(("", localPort)) # Bind to address and port

    while(True):
        bytesAddressPair = UDPServerSocket.recvfrom(bufferSize)
        message = bytesAddressPair[0].decode("utf-8")
        address = bytesAddressPair[1]
        # Sending a reply to client
        UDPServerSocket.sendto(str.encode("Hello"), address)
        break

    UDPServerSocket.close()

    Then I try to send the stream with FFmpeg, using the same port numbers on both sides: localPort for the server and, for the client, the one acquired from address:

    import re
    from threading import Thread
    from subprocess import Popen, PIPE

    def detect_devices():
            list_cmd = 'ffmpeg -list_devices true -f dshow -i dummy'.split()
            p = Popen(list_cmd, stderr=PIPE)
            flagcam = flagmic = False
            for line in iter(p.stderr.readline,''):
                if flagcam:
                    cam = re.search('".*"',line.decode(encoding='UTF-8')).group(0)
                    cam = cam if cam else ''
                    flagcam = False
                if flagmic:
                    mic = re.search('".*"',line.decode(encoding='UTF-8')).group(0)
                    mic = mic if mic else ''
                    flagmic = False
                elif 'DirectShow video devices'.encode(encoding='UTF-8') in line:
                    flagcam = True
                elif 'DirectShow audio devices'.encode(encoding='UTF-8') in line:
                    flagmic = True
                elif 'Immediate exit requested'.encode(encoding='UTF-8') in line:
                    break
            return cam, mic   


    class ffmpegThread (Thread):
        def __init__(self, address):
            Thread.__init__(self)
            self.address = address

        def run(self):
            cam, mic = detect_devices()
            command = 'ffmpeg -f dshow -i video='+cam+':audio='+mic+' -profile:v high -pix_fmt yuvj420p -level:v 4.1 -preset ultrafast -tune zerolatency -vcodec libx264 -r 10 -b:v 512k -s 240x160 -acodec aac -ac 2 -ab 32k -ar 44100 -f mpegts -flush_packets 0 -t 40 udp://'+self.address+'?pkt_size=1316?localport='+str(localPort)
            p = Popen(command , stderr=PIPE)
            for line in iter(p.stderr.readline,''):
                if len(line) <5: break
            p.terminate()

    thread1 = ffmpegThread(address[0]+":"+str(address[1]))
    thread1.start()

    While on the other side I have:

    from threading import Thread
    import tkinter as tk
    import vlc

    class myframe(tk.Frame):
        def __init__(self, width=240, height=160):
            self.root = tk.Tk()
            super(myframe, self).__init__(self.root)
            self.root.geometry("%dx%d" % (width, height))
            self.root.wm_attributes("-topmost", 1)
            self.grid()
            self.frame = tk.Frame(self, width=240, height=160)
            self.frame.configure(bg="black")
            self.frame.grid(row=0, column=0, columnspan=2)
            self.play()
            self.root.mainloop()

        def play(self):
            self.player = vlc.Instance().media_player_new()
            self.player.set_mrl('udp://@0.0.0.0:5000')
            self.player.set_hwnd(self.frame.winfo_id())
            self.player.play()

    class guiThread (Thread):
        def __init__(self, nome):
            Thread.__init__(self)
            self.nome = nome

        def run(self):
            app = myframe()

    and:

    import socket

    msgFromClient       = "Hello UDP Server"
    bytesToSend         = str.encode(msgFromClient)
    serverAddressPort   = ("MYglobal_IPaddress", 1234)
    bufferSize          = 1024
    localPort   = 5000

    # Create a UDP socket at client side
    UDPClientSocket = socket.socket(family=socket.AF_INET, type=socket.SOCK_DGRAM) 
    UDPClientSocket.bind(("", localPort))

    UDPClientSocket.sendto(bytesToSend, serverAddressPort)

    msgFromServer = UDPClientSocket.recvfrom(bufferSize)
    msg = msgFromServer[0].decode("utf-8")
    print(msg)
    UDPClientSocket.close()
    gui = guiThread("ThreadGUI")
    gui.start()

    I'm not able to reach the client with the stream. I have tested everything else, so the problem should be in the way I try to reach the client.
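
    One thing worth checking, as an assumption rather than a confirmed diagnosis: in ffmpeg's UDP URLs the options after the ? are chained with &, not with a second ?, so the localport=1234 part of the sender command may never be applied; the stream would then leave from a random source port and miss the NAT mapping opened by the hello exchange on port 1234. A minimal pair of commands to test the path outside the Python wrappers, with CAMERA, MICROPHONE, CLIENT_IP and CLIENT_PORT as placeholders:

        # sender side (quote the URL so the shell does not interpret the &)
        ffmpeg -f dshow -i video="CAMERA":audio="MICROPHONE" -vcodec libx264 -tune zerolatency -acodec aac -f mpegts "udp://CLIENT_IP:CLIENT_PORT?pkt_size=1316&localport=1234"

        # receiver side (listens on port 5000, replacing the VLC window for the test)
        ffplay "udp://0.0.0.0:5000"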