Advanced search

Media (1)

Keyword: - Tags -/artwork

Other articles (65)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash fallback is used.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

  • Adding notes and captions to images

    7 February 2011

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, editing and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

On other sites (9197)

  • PyInstaller --noconsole still shows the console after running the app

    23 September 2020, by Kiren78

    I've built an app that downloads and plays a sound every time someone inserts or removes a USB drive from the PC.
    
    Code:

from playsound import playsound
from win10toast import ToastNotifier
from time import sleep
from typing import Callable
import threading
import os
import youtube_dl
import win32file


def play_audio():
    try:
        path = os.getcwd() + "\\audio.mp3"
        ydl_opts = {
            'format': 'bestaudio/best',
            'postprocessors': [{
                'key': 'FFmpegExtractAudio',
                'preferredcodec': 'mp3',
                'preferredquality': '192',
            }],
            'outtmpl': path
        }

        with youtube_dl.YoutubeDL(ydl_opts) as ydl:
            ydl.download(['https://www.youtube.com/watch?v=_0HTwQjMr9k'])

        playsound(path)
    except Exception as e:
        toast = ToastNotifier()
        toast.show_toast("RIP prank failed byq", "no ogolnie prank failed rip co jest?", duration=20)


def get_drives():
    # List the drive letters (B: through Z:) currently mounted as removable drives.
    drive_list = []
    drivebits = win32file.GetLogicalDrives()
    for d in range(1, 26):
        mask = 1 << d
        if drivebits & mask:
            drname = '%c:\\' % chr(ord('A') + d)
            t = win32file.GetDriveType(drname)
            if t == win32file.DRIVE_REMOVABLE:
                drive_list.append(drname)
    return drive_list


def watch_drives(on_change: Callable[[dict], None] = print, poll_interval: int = 1):
    # Poll the removable-drive list every poll_interval seconds and call
    # on_change (and play the audio) whenever it changes.
    def _watcher():
        global prev
        while True:
            drives = get_drives()
            if prev != drives:
                on_change(drives)
                play_audio()
                prev = drives
            sleep(poll_interval)

    t = threading.Thread(target=_watcher)
    t.start()
    t.join()


if __name__ == '__main__':
    prev = get_drives()
    watch_drives(on_change=print)

    I don't understand it, but every time the download starts and FFmpeg starts printing its output (via youtube-dl), a couple of console windows appear for a fraction of a second and immediately disappear. How can I TOTALLY disable the console so that even FFmpeg can't open it?

    EDIT: Yes, I've already tried the --windowed and -w parameters in PyInstaller.
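
    One workaround often suggested for this kind of situation (an editor's sketch, not part of the original question) is to force every child process spawned by the frozen app to be created without a console window. The snippet below assumes Windows and Python 3.7+ (for subprocess.CREATE_NO_WINDOW), and it relies on youtube-dl resolving Popen through the subprocess module at call time, which may not hold for every version.

import subprocess

# Hedged sketch: make every child process (including the ffmpeg processes
# that youtube-dl spawns) start without its own console window.
# Apply this patch before any downloading happens.
class _NoWindowPopen(subprocess.Popen):
    def __init__(self, *args, **kwargs):
        kwargs["creationflags"] = (
            kwargs.get("creationflags", 0) | subprocess.CREATE_NO_WINDOW
        )
        super().__init__(*args, **kwargs)

subprocess.Popen = _NoWindowPopen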

  • Converting mp4 to webm, ogg formats

    24 August 2016, by user2943893

    Currently I need to convert an mp4 video to webm and ogg. To convert mp4 to webm I have used "ffmpeg.exe". I am running the following code to convert the video from mp4 to webm.

    [DllImport("User32.dll")]
    public static extern bool SetForegroundWindow(IntPtr hWnd);
    public void mciConvertWavMP3(string fileName, bool waitFlag)
    {

       string savepath = Server.MapPath(fileName);
       string destpath = Server.MapPath(fileName);
       string pworkingDir = Server.MapPath("~/ffmpeg/");

    // string outfile = "-b:a 16 --resample 24 -m j " + savepath + " " + savepath.Replace(".wav", ".mp3") + ""; //--- lame code
     //  string outfile = "-b 192k -i " + savepath + " " + destpath.Replace(".mp4", ".webm");
      // string outfile = "ffmpeg -i " + savepath + " -acodec libvorbis -ac 2 -ab 96k -ar 44100 -b 345k -s 640x360 " + Server.MapPath("output-file.webm");

       string outfile = "ffmpeg -i \"test7.mp4\" -c:v libvpx -crf 10 -b:v 1M -c:a libvorbis \"" + Server.MapPath("output-file.webm") + "\"";
      // string outfile = "ffmpeg -i \""+fileName+"\" -codec:v libvpx -quality good -cpu-used 0 -b:v 600k -qmin 10 -qmax 42 -maxrate 500k -bufsize 1000k -threads 2 -vf scale=-1:480 -an -pass 1 -f webm /dev/null";

       System.Diagnostics.ProcessStartInfo psi = new System.Diagnostics.ProcessStartInfo();
       psi.FileName = pworkingDir+"ffmpeg.exe";
       psi.Arguments = outfile;
       psi.UseShellExecute = true;
       psi.CreateNoWindow = false;
       System.Diagnostics.Process p = System.Diagnostics.Process.Start(psi);
       Thread.Sleep(1000);// utput.webm
       if (waitFlag)
       {
           p.WaitForExit();
           // wait for exit of called application
       }
    }

    I kept my project folder on the D: drive.

    When I run it from the command prompt it works fine, but when I run it through this code it does not.

    The errors I am getting are of this kind:

    "unable to find a suitable output format for 'ffmpeg'". So please, can anyone help me solve this issue?

    Thanks & Regards
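
    A note on the error above (an editor's addition, not part of the original question): "unable to find a suitable output format for 'ffmpeg'" usually means the word ffmpeg itself was handed to ffmpeg as if it were an output file, which is what happens when the argument string repeats the program name. The sketch below shows the equivalent corrected invocation in Python (to match the other snippets on this page rather than the asker's C#); the paths and file names are placeholders.

import subprocess

# The program name belongs in the first list element only; the remaining
# arguments must not start with "ffmpeg" again.
cmd = [
    r"D:\project\ffmpeg\ffmpeg.exe",   # placeholder path to ffmpeg.exe
    "-i", "test7.mp4",                 # input file from the question
    "-c:v", "libvpx", "-crf", "10", "-b:v", "1M",
    "-c:a", "libvorbis",
    "output-file.webm",                # placeholder output path
]
subprocess.run(cmd, check=True)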

  • Create Panorama from Non-Sequential Video Frames

    6 May 2021, by M.Innat

    There is a similar question (not as detailed, and with no exact solution).

    I want to create a single panorama image from video frames, and for that I first need to extract a minimal set of non-sequential video frames. A demo video file is uploaded here.

    What I Need

    A mechanism that can produce not only non-sequential video frames but frames selected in such a way that they can be used to create a panorama image. A sample is given below. As we can see, to create a panorama image all the input samples must share a minimum overlap region with each other, otherwise it cannot be done.

    (image)

    So, if I have the following video frame order

    A, A, A, B, B, B, B, C, C, A, A, C, C, C, B, B, B ...

    To create a panorama image, I need to get something like the following: a reduced set of sequential (adjacent) frames that still keep a minimum overlap with each other.

         [overlap]  [overlap]  [overlap] [overlap]  [overlap]
 A,    A,B,       B,C,       C,A,       A,C,      C,B,  ...
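
    One possible mechanism for this selection step (an editor's sketch, not part of the original post): estimate how much each new frame overlaps the last kept frame by matching ORB features between them, and keep a frame only once that estimate has dropped below a threshold. The match-count ratio is only a rough proxy for true area overlap, and the function name and threshold here are assumptions that would need tuning against the real footage.

import cv2

def select_keyframes(frames, keep_below=0.8):
    # Keep a frame only when the fraction of the last kept frame's ORB
    # keypoints that still match the candidate frame drops below
    # `keep_below`, i.e. the camera has moved enough that the frames are
    # no longer near-duplicates but some overlap should remain.
    orb = cv2.ORB_create(nfeatures=2000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    kept = [frames[0]]
    kp_ref, des_ref = orb.detectAndCompute(
        cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY), None)

    for frame in frames[1:]:
        kp, des = orb.detectAndCompute(
            cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), None)
        if des is None or des_ref is None:
            continue
        matches = matcher.match(des_ref, des)
        overlap = len(matches) / max(len(kp_ref), 1)
        if overlap < keep_below:
            kept.append(frame)
            kp_ref, des_ref = kp, des
    return kept

    Frames kept this way could then be passed straight to the OpenCV stitcher shown further down.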

    What I've Tried and Where I'm Stuck

    A demo video clip is given above. To get non-sequential video frames, I primarily rely on ffmpeg.

    Trial 1 Ref.

    ffmpeg -i check.mp4 -vf mpdecimate,setpts=N/FRAME_RATE/TB -map 0:v out.mp4

    After that, I sliced out.mp4 into individual frames using OpenCV:

import cv2, os
from pathlib import Path

vframe_dir = Path("vid_frames/")
vframe_dir.mkdir(parents=True, exist_ok=True)

# Dump every frame of the decimated video to vid_frames/frame<N>.jpg
vidcap = cv2.VideoCapture('out.mp4')
success, image = vidcap.read()
count = 0

while success:
    cv2.imwrite(f"{vframe_dir}/frame{count}.jpg", image)
    success, image = vidcap.read()
    count += 1

    Next, I rotated the saved images into a horizontal (landscape) orientation, since the video was shot vertically:

from tqdm import tqdm  # missing import in the original snippet

vframe_dir = Path("out/")          # folder holding the extracted frames
vframe_dir.mkdir(parents=True, exist_ok=True)

vframe_dir_rot = Path("vframe_dir_rot/")
vframe_dir_rot.mkdir(parents=True, exist_ok=True)

for i, each_img in tqdm(enumerate(os.listdir(vframe_dir))):
    image = cv2.imread(f"{vframe_dir}/{each_img}")[:, :, ::-1]  # read (BGR to RGB)

    # Rotate 180° then 90° clockwise, i.e. a net 90° counter-clockwise turn.
    image = cv2.rotate(image, cv2.ROTATE_180)
    image = cv2.rotate(image, cv2.ROTATE_90_CLOCKWISE)

    cv2.imwrite(f"{vframe_dir_rot}/{each_img}", image[:, :, ::-1])  # save (RGB back to BGR)

    The output of this method (with ffmpeg) is OK, but it is unsuitable for creating the panorama image: consecutive frames in the result do not reliably overlap, so the panorama cannot be generated.

    Trial 2 - Ref

    ffmpeg -i check.mp4 -vf decimate=cycle=2,setpts=N/FRAME_RATE/TB -map 0:v out.mp4

    This didn't work at all.

    Trial 3

    ffmpeg -i check.mp4 -ss 0 -qscale 0 -f image2 -r 1 out/images%5d.png

    No luck either. However, this last ffmpeg command was the closest so far, though still not enough. Compared to the others it gave me a small number of non-duplicate frames (good), but it still included frames I do not need, so I ended up manually picking some desired frames, after which the OpenCV stitching algorithm worked. So, after picking some frames and rotating them (as described above):

stitcher = cv2.Stitcher.create()
status, pano = stitcher.stitch(images) # images: manually picked video frames -_- 
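
    A small usage note (an editor's addition, not from the original post): cv2.Stitcher reports why it failed through its status code, so it is worth checking that code before assuming the frame selection is at fault. A hedged sketch:

import cv2

stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(images)  # `images` as in the snippet above

if status == cv2.Stitcher_OK:
    cv2.imwrite("pano.jpg", pano)
else:
    # A non-zero status usually means the frames did not share enough
    # matchable overlap.
    print(f"Stitching failed with status {status}")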

    Update

    After these trials I have more or less fallen back on a non-programming solution, but I would love to see an efficient programmatic approach.

    On the given demo video I used Adobe products (Premiere Pro and Photoshop) to do the task, following a video instruction. The issue is that I first exported essentially all video frames via Premiere (without dropping any, which becomes computationally costly later) and then stitched them in Photoshop (following the YouTube instruction). It was too heavy for these editing tools and is not a nice workflow, but the output was better than anything so far. In the end I only used a subset of frames (400+ out of 1200+).

    (image)

    There are some bigger challenges as well. The original video clips, unlike the given demo clip, have some serious additional conditions:

    • The camera movement is not steady, i.e. there is camera shake
    • Lighting conditions, i.e. the same spot can look visually different across frames
    • Camera flickering or banding

    These conditions are not present in the given demo video, and they add heavy additional challenges for creating panorama images from such footage. Even with the non-programming approach (using the Adobe tools) I couldn't get a good result.

    However, for now, all I am interested in is getting a panorama image from the given demo video, which does not have the above conditions. But I would welcome any comments or suggestions on that as well.