Advanced search

Media (9)

Keyword: - Tags -/soundtrack

Other articles (96)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the MediaSPIP sources, as a standalone version.
    As with the previous version, all software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (13157)

  • FFMPEG keeping quality when reducing FPS and streaming over RTSP with rtsp-simple-server

    25 January 2021, by Jens Schneider

    I'm using rtsp-simple-server (https://github.com/aler9/rtsp-simple-server) and feed the RTSP server with an FFmpeg stream.

    


    I use a Docker Compose file to start the stream:

    


    version: "3.8"

services:

  ffmpeg:
    container_name: ffmpeg-base
    restart: always
    image: "jenssgb/rtspffmpeg:base"
    depends_on:
      - rtsp-server
    volumes:
      - $PWD/:/video
    network_mode: "host"
    command: "ffmpeg -re -stream_loop -1 -i /video/footage-1-b.mp4 -c copy -f rtsp rtsp://localhost:8554/compose-rtsp"
  
  rtsp-server:
    container_name: rtsp-server-base
    restart: always
    image: "aler9/rtsp-simple-server"
    network_mode: "host"


    


    Now I'm trying to reduce the FPS of my video by transcoding it:

    


    command: -re -stream_loop -1 -i ${VIDEO_FILE} -vf "fps=${FPS_COMPOSE}" -f rtsp rtsp://localhost:8554/compose-rtsp


    


    This is basically working, but the quality of the output video becomes pretty bad. I tried a lot of things like -c:v libx264, which helped for a minute but then made ffmpeg crash.

    


    av_interleaved_write_frame(): Broken pipe0:00:09.99 bitrate=N/A speed=0.985x    
[rtsp @ 0x5563b1755640] Packets poorly interleaved, failed to avoid negative timestamp -33660 in stream 0.
Try -max_interleave_delta 0 as a possible workaround.
av_interleaved_write_frame(): Broken pipe
Error writing trailer of rtsp://localhost:8554/compose-rtsp: Broken pipe


    


    Any idea how I can reduce the FPS and send the stream to the server while keeping the video quality? Later I'm going to reduce the resolution as well, but for now I want to keep resolution and quality and only reduce the FPS.
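    A minimal sketch of a command along those lines, in case it helps: it combines the fps filter with an explicit libx264 quality setting and the -max_interleave_delta 0 workaround that ffmpeg itself suggests in the logs. The CRF value, preset, keyframe interval and audio bitrate are assumptions, not values taken from the original setup.

# Re-encode at 5 fps while keeping the source resolution; a lower CRF means
# higher quality (18 is close to visually lossless). The low audio bitrate
# avoids the "Too many bits ... per frame" warning for the 8 kHz mono AAC
# track, and -max_interleave_delta 0 is the workaround named in the error log.
ffmpeg -re -stream_loop -1 -i /video/footage-1-b.mp4 \
       -vf "fps=5" \
       -c:v libx264 -preset veryfast -crf 18 -g 10 \
       -c:a aac -b:a 32k \
       -max_interleave_delta 0 \
       -f rtsp rtsp://localhost:8554/compose-rtsp

    Whether the broken pipe disappears also depends on rtsp-simple-server itself; if it persists, forcing the output rate with -r 5 instead of the fps filter is another thing worth trying.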

    


    Full logs from my test with -c:v libx264:

    


        ffmpeg -re -stream_loop -1 -i footage-1-b.mp4 -vf "fps=5" -c:v libx264 -f rtsp rtsp://localhost:8554/compose-rtsp
ffmpeg version 4.2.4-1ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 9 (Ubuntu 9.3.0-10ubuntu2)
  configuration: --prefix=/usr --extra-version=1ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'footage-1-b.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    title           : Session streamed by "nessyMediaServer"
    encoder         : Lavf58.29.100
    comment         : h264_3
  Duration: 00:59:59.63, start: 0.000000, bitrate: 2099 kb/s
    Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuvj420p(pc), 1280x720 [SAR 1:1 DAR 16:9], 2061 kb/s, 24.96 fps, 25 tbr, 12800 tbn, 25 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 8000 Hz, mono, fltp, 35 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
  Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
[aac @ 0x56277bc7f840] Too many bits 8832.000000 > 6144 per frame requested, clamping to max
[libx264 @ 0x56277bbc33c0] using SAR=1/1
[libx264 @ 0x56277bbc33c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x56277bbc33c0] profile High, level 3.1
[libx264 @ 0x56277bbc33c0] 264 - core 155 r2917 0a84d98 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=5 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, rtsp, to 'rtsp://localhost:8554/compose-rtsp':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    title           : Session streamed by "nessyMediaServer"
    comment         : h264_3
    encoder         : Lavf58.29.100
    Stream #0:0(und): Video: h264 (libx264), yuvj420p(pc), 1280x720 [SAR 1:1 DAR 16:9], q=-1--1, 5 fps, 90k tbn, 5 tbc (default)
    Metadata:
      handler_name    : VideoHandler
      encoder         : Lavc58.54.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
    Stream #0:1(und): Audio: aac (LC), 8000 Hz, mono, fltp, 48 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
      encoder         : Lavc58.54.100 aac
av_interleaved_write_frame(): Broken pipe0:00:09.87 bitrate=N/A speed=0.978x    
[rtsp @ 0x56277bba0640] Packets poorly interleaved, failed to avoid negative timestamp -33660 in stream 0.
Try -max_interleave_delta 0 as a possible workaround.
av_interleaved_write_frame(): Broken pipe
Error writing trailer of rtsp://localhost:8554/compose-rtsp: Broken pipe
frame=   50 fps=4.6 q=23.0 Lsize=N/A time=00:00:10.21 bitrate=N/A speed=0.947x    
video:162kB audio:8kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
[libx264 @ 0x56277bbc33c0] frame I:1     Avg QP:19.85  size:165667
[libx264 @ 0x56277bbc33c0] frame P:13    Avg QP:20.88  size:140481
[libx264 @ 0x56277bbc33c0] frame B:36    Avg QP:24.58  size: 55445
[libx264 @ 0x56277bbc33c0] consecutive B-frames:  4.0%  0.0%  0.0% 96.0%
[libx264 @ 0x56277bbc33c0] mb I  I16..4:  4.4% 30.8% 64.8%
[libx264 @ 0x56277bbc33c0] mb P  I16..4:  4.1% 10.6% 20.0%  P16..4: 24.4% 24.8% 13.3%  0.0%  0.0%    skip: 2.6%
[libx264 @ 0x56277bbc33c0] mb B  I16..4:  0.8%  2.0%  4.0%  B16..8: 40.3% 14.5%  5.2%  direct:11.8%  skip:21.4%  L0:77.1% L1: 7.9% BI:14.9%
[libx264 @ 0x56277bbc33c0] 8x8 transform intra:30.1% inter:11.9%
[libx264 @ 0x56277bbc33c0] coded y,uvDC,uvAC intra: 82.5% 60.9% 26.6% inter: 55.0% 42.4% 2.7%
[libx264 @ 0x56277bbc33c0] i16 v,h,dc,p: 17% 26% 34% 23%
[libx264 @ 0x56277bbc33c0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 22% 33% 15%  3%  4%  5%  4%  3%  9%
[libx264 @ 0x56277bbc33c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 20% 22% 13%  3% 17% 11%  5%  3%  6%
[libx264 @ 0x56277bbc33c0] i8c dc,h,v,p: 54% 25% 16%  5%
[libx264 @ 0x56277bbc33c0] Weighted P-Frames: Y:7.7% UV:7.7%
[libx264 @ 0x56277bbc33c0] ref P L0: 33.2% 11.6% 29.0% 23.9%  2.4%
[libx264 @ 0x56277bbc33c0] ref B L0: 79.6% 11.9%  8.5%
[libx264 @ 0x56277bbc33c0] ref B L1: 95.9%  4.1%
[libx264 @ 0x56277bbc33c0] kb/s:3190.34
[aac @ 0x56277bc7f840] Qavg: 65536.000
Conversion failed!


    


    Thank you,
J

    


  • FFMPEG: Take 1-second clips from each video file in a directory, until total playtime is reached

    3 February 2021, by Matthew

    I'm working on a music art video project for a song.

    


    I have a directory that contains approximately 400 individual videos:

    • varying lengths (as short as 3 seconds and as long as 31 minutes)
    • varying file types (mp4 and webm)
    • varying resolutions/framerates/bitrates


    


    The output video should consist of:

    • 1 second chunks of each video
    • in round-robin fashion
    • until all videos are played fully or the output total video length reaches a certain limit (example, 20 minutes).
    • Output video should be 1280×720 at 24fps, with no preference to bitrate.
    • Videos that are larger should be scaled down and letterboxed (vertically or horizontally).
    • Audio is not important at all. The video can be silent. I can overlay audio separately.


    


    I do not want to loop short videos. In the example below, you can see that, for top-left-view-take-1.mp4, each 'clip' that's taken is incrementally further into the video. In other words, it shouldn't take the same 1 second clip from the beginning. The goal is to get further into each individual video as the output video progresses.
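    For a single such cut, ffmpeg's -ss and -t options already express "take one second starting at a given offset". A rough sketch for one chunk, using a file name from the example below and the 1280×720 / 24 fps target from the requirements above (the output name clip_0002.mp4 is made up):

# Take the 1-second chunk starting at 00:02 of one input (i.e. its third clip),
# scaled and letterboxed to 1280x720 at 24 fps, with audio dropped.
ffmpeg -ss 2 -i top-right-view-take-2.mp4 -t 1 -an \
       -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,fps=24" \
       -c:v libx264 clip_0002.mp4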

    


    For example, say my directory contains files such as the following (and, for the example here, we'll say these are all the files in the directory):

    


    overhead-view-take-1.mp4 (3 seconds, for the sake of the question)
top-right-view-take-1.mp4 (3 seconds, for the sake of the question)
top-right-view-take-2.mp4 (5 seconds, for the sake of the question)
outside-kaleidoscope-1.mp4
yellow-kaleidoscope-1.mp4
red-kaleidoscope-1.mp4
brake-lights-slow-1.mp4
soft-city-lights.webm


    


    Assume anything that isn't marked with a video duration is at least 5 seconds long.

    


    Based on the above files & times, here would be the order and time code of each clip during the first 30 seconds of the output video:

    


    Based on each video's total duration, the output video should include 1 second clips from all videos in the directory.

    Output time Clip time Clip file
    00:00-00:01 00:00-00:01 overhead-view-take-1.mp4
    00:01-00:02 00:00-00:01 top-right-view-take-1.mp4
    00:02-00:03 00:00-00:01 top-right-view-take-2.mp4
    00:03-00:04 00:00-00:01 outside-kaleidoscope-1.mp4
    00:04-00:05 00:00-00:01 yellow-kaleidoscope-1.mp4
    00:05-00:06 00:00-00:01 red-kaleidoscope-1.mp4
    00:06-00:07 00:00-00:01 brake-lights-slow-1.mp4
    00:07-00:08 00:00-00:01 soft-city-lights.webm
    00:08-00:09 00:01-00:02 overhead-view-take-1.mp4
    00:09-00:10 00:01-00:02 top-right-view-take-1.mp4
    00:10-00:11 00:01-00:02 top-right-view-take-2.mp4
    00:11-00:12 00:01-00:02 outside-kaleidoscope-1.mp4
    00:12-00:13 00:01-00:02 yellow-kaleidoscope-1.mp4
    00:13-00:14 00:01-00:02 red-kaleidoscope-1.mp4
    00:14-00:15 00:01-00:02 brake-lights-slow-1.mp4
    00:15-00:16 00:01-00:02 soft-city-lights.webm
    00:16-00:17 00:02-00:03 overhead-view-take-1.mp4
    00:17-00:18 00:02-00:03 top-right-view-take-1.mp4
    00:18-00:19 00:02-00:03 top-right-view-take-2.mp4
    00:19-00:20 00:02-00:03 outside-kaleidoscope-1.mp4
    00:20-00:21 00:02-00:03 yellow-kaleidoscope-1.mp4
    00:21-00:22 00:02-00:03 red-kaleidoscope-1.mp4
    00:22-00:23 00:02-00:03 brake-lights-slow-1.mp4
    00:23-00:24 00:02-00:03 soft-city-lights.webm

    


    


    Now, the output video should include 1 second clips only from videos that still have time remaining. The videos that don't have any time remaining, such as overhead-view-take-1.mp4 and top-right-view-take-1.mp4, are not present here.

    Output time Clip time Clip file
    00:24-00:25 00:03-00:04 top-right-view-take-2.mp4
    00:25-00:26 00:03-00:04 outside-kaleidoscope-1.mp4
    00:26-00:27 00:03-00:04 yellow-kaleidoscope-1.mp4
    00:27-00:28 00:03-00:04 red-kaleidoscope-1.mp4
    00:28-00:29 00:03-00:04 brake-lights-slow-1.mp4
    00:29-00:30 00:03-00:04 soft-city-lights.webm

    


    


    What I've tried

    


      

    • I've read through the docs and cobbled together some code that produces output, but it only seems to work with images; I can't get the same thing to work with videos (specifically the incremental chunks).
    • I've tried manipulating commands meant to "create a snapshot every x seconds/frames", but I've hit dead ends there.
    • I've also started trying to create a text file to run the input from. That's the point where I thought it would make sense to ask here.


    


    My main issue is picking off incremental chunks of individual videos, and playing those in sequence.
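    Not a full solution, but one way to get those incremental chunks is to pre-cut every 1-second piece with a small shell loop and then join the pieces with ffmpeg's concat demuxer. A rough sketch under the assumptions above (normalised 1280×720 / 24 fps intermediates, a 20-minute cap); the directory layout, file names and variable names are made up for illustration:

#!/usr/bin/env bash
# Round-robin 1-second clips: pass 0 takes second 0 of every video, pass 1 takes
# second 1 of every video that still has a full second left, and so on, until
# the output reaches MAX_SECONDS. Chunks are normalised so they can be
# concatenated with -c copy at the end.
set -euo pipefail

MAX_SECONDS=1200          # 20-minute cap on the output
mkdir -p chunks
: > playlist.txt

total=0
pass=0
while [ "$total" -lt "$MAX_SECONDS" ]; do
  added=0
  for f in *.mp4 *.webm; do
    [ -e "$f" ] || continue
    dur=$(ffprobe -v error -show_entries format=duration -of csv=p=0 "$f")
    # skip files that do not have a full second left at this offset
    if [ "${dur%.*}" -le "$pass" ]; then continue; fi
    out="chunks/$(basename "$f")_${pass}.mp4"
    ffmpeg -v error -ss "$pass" -i "$f" -t 1 -an \
      -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,fps=24" \
      -c:v libx264 -preset veryfast "$out"
    echo "file '$out'" >> playlist.txt
    total=$((total + 1))
    added=1
    [ "$total" -ge "$MAX_SECONDS" ] && break
  done
  [ "$added" -eq 0 ] && break   # every source file is exhausted
  pass=$((pass + 1))
done

# join all chunks into the final video
ffmpeg -f concat -safe 0 -i playlist.txt -c copy output.mp4

    The intermediate re-encode is wasteful but keeps the concat step trivial; the same schedule could also be fed to the concat demuxer with inpoint/outpoint directives to avoid the temporary files.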

    


    Thoughts?

    


    Environment details
I have access to Windows, Mac, and Linux machines, so the environment isn't that important to me. Here's ffmpeg's output:

    


    ffmpeg version 4.3.1-0york0~16.04 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.12) 20160609
  configuration: --prefix=/usr --extra-version='0york0~16.04' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libzimg --enable-pocketsphinx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 51.100 / 56. 51.100
  libavcodec     58. 91.100 / 58. 91.100
  libavformat    58. 45.100 / 58. 45.100
  libavdevice    58. 10.100 / 58. 10.100
  libavfilter     7. 85.100 /  7. 85.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  7.100 /  5.  7.100
  libswresample   3.  7.100 /  3.  7.100
  libpostproc    55.  7.100 / 55.  7.100
Hyper fast Audio and Video encoder


    


  • .mp4 file is not playing in Django template or in Firefox or Chrome

    9 February 2021, by Himanshu sharma

    I can save a live RTSP video stream to .mp4, but when I play the saved .mp4 in a Django template, or in Firefox, Chrome, or VLC, the video does not play on Ubuntu.
I think I have a compatibility issue with the .mp4 file. Furthermore, I want to show and play the saved .mp4 file in Django.

    


    I have two IP cameras, each of which provides a live RTSP video stream.

    


    self.input_stream---> rtsp://admin:Admin123@192.168.1.208/user=admin_password=Admin123_channel=0channel_number_stream=0.sdp

self.input_stream---> rtsp://Admin:@192.168.1.209/user=Admin_password=_channel=0channel_number_stream=0.sdp


    


    Python 3.8.5 (default, Jul 28 2020, 12:59:40)
[GCC 9.3.0] on linux,

    


    Django==3.1.2

    


    I am running this code on different Ubuntu PCs

    


    Ubuntu = 18.04 and 20.04

    


    opencv-contrib-python==4.4.0.46

    


    opencv-python==4.4.0.46

    


    ffmpeg version 4.2.4-1ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 9 (Ubuntu 9.3.0-10ubuntu2)

    


    I have tried different fourcc codes to save the .mp4, but the fourcc below works perfectly on Ubuntu.

    


    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
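    One general note that may be relevant here (an observation, not something stated in the question): the 'mp4v' fourcc makes OpenCV write MPEG-4 Part 2 video, which Firefox and Chrome do not decode even inside an .mp4 container; the browsers expect H.264. If the local OpenCV/FFmpeg build ships an H.264 encoder, the 'avc1' fourcc (one of the commented-out attempts in the code below) writes H.264 directly; otherwise the saved file can be re-encoded afterwards, for example:

# Hypothetical post-processing step: convert an OpenCV-saved mp4v file to H.264
# so the HTML5 <video> tag in the Django template can play it. "1.mp4" stands
# for whatever output_save_directory produced.
ffmpeg -i 1.mp4 -c:v libx264 -pix_fmt yuv420p -movflags +faststart 1_h264.mp4

    It may also be worth calling writer.release() on the previous writer before opening the next one (the release call is commented out further down), since an .mp4 that was never finalised will typically not play even in VLC.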

    


# Imports implied by the code below (not shown in the original snippet)
import threading
import time

import cv2
import imutils
from imutils.video import FPS

class ffmpegStratStreaming(threading.Thread):
    def __init__(self, input_stream=None, output_stream=None, camera=None, *args, **kwargs):
        self.input_stream = input_stream
        self.output_stream = output_stream
        self.camera = camera
        super().__init__(*args, **kwargs)

    def run(self):
        try:
            vs = cv2.VideoCapture(self.input_stream)
            fps_rate = int(vs.get(cv2.CAP_PROP_FPS))
            ##############################
            ##############################  
            # ~ print('fps rate-->', fps_rate,'camera id-->' ,str(self.camera.id))
            # ~ vs.set(cv2.CAP_PROP_POS_FRAMES,50)  #Set the frame number to be obtained
            # ~ print('fps rate-->', fps_rate,'camera id-->' ,str(self.camera.id),' ####### ')
            
            # initialize the video writer (we'll instantiate later if need be)
            writer = None

            # initialize the frame dimensions (we'll set them as soon as we read
            # the first frame from the video)
            W = None
            H = None    
            # start the frames per second throughput estimator
            fps = FPS().start()

            #  saving frame in avi video format
            video_file_count = 0
            totalFrames = 0  # frame counter incremented in the main loop below
            start_time = time.time()

            while True:
                try:
                    # grab the next frame and handle if we are reading from either
                    # VideoCapture or VideoStream
                    
                    frame_init = vs.read()
                    frame = frame_init[1] if self.input_stream else frame_init
                    
                    # if frame is can't read correctly ret is False
                    while frame_init[0] == False:
                        print("Can't receive frame. Retrying ...")
                        vs.release()
                        vs = cv2.VideoCapture(self.input_stream)                                                                              
                        frame_init = vs.read()
                        frame = frame_init[1] if self.input_stream else frame_init

                    # if we are viewing a video and we did not grab a frame then we
                    # have reached the end of the video
                    if self.input_stream is not None and frame is None:
                        break

                    # resize the frame to have a maximum width of 500 pixels (the
                    # less data we have, the faster we can process it), then convert
                    # the frame from BGR to RGB for dlib
                    frame = imutils.resize(frame, width=500)        
                    
                    
                    #<---------------------- Start of  writing a video to disk ------------------------->                   
                    minute = 1
                    second  = 60
                    seconds_to_save_video = int(minute) * int(second)  # length of each saved file, in seconds

                
                    # if we are supposed to be writing a video to disk, initialize
                    if time.time() - start_time >= seconds_to_save_video or video_file_count == 0:
                        ## where H = heigth, W = width, C = channel 
                        H, W, C = frame.shape
                        video_file_count += 1
                        start_time = time.time()
                        output_save_directory = self.output_stream+str(video_file_count)+'.mp4'
                        # fourcc = cv2.VideoWriter_fourcc(*"MJPG")
                        # fourcc = cv2.VideoWriter_fourcc(*'XVID')
                        # fourcc = cv2.VideoWriter_fourcc(*'X264')
                        # fourcc = cv2.VideoWriter_fourcc(*'MP4V')

                        
                        # fourcc = cv2.VideoWriter_fourcc(*'FMP4')

                        # fourcc = cv2.VideoWriter_fourcc(*'avc1')


                        # fourcc = cv2.VideoWriter_fourcc('M','J','P','G')

                        fourcc = cv2.VideoWriter_fourcc(*'mp4v')
                        # fourcc = cv2.VideoWriter_fourcc(*'mp3v')
                        # fourcc = 0x00000021
                        print(fourcc, type(fourcc),'ffffffff')
                        # a = int(vs.get(cv2.CAP_PROP_FOURCC))
                        # print(a,type(a),'  aaaaaaaa' )

                        # writer = skvideo.io.FFmpegWriter(output_save_directory, outputdict={
                        #   '-vcodec': 'libx264', '-b': '300000000'
                        # })
                        
                        # writer = skvideo.io.FFmpegWriter(self.output_stream, outputdict={'-r': '120', '-c:v': 'libx264', '-crf': '0', '-preset': 'ultrafast', '-pix_fmt': 'yuv444p'})
                        
                        writer = cv2.VideoWriter(output_save_directory, fourcc ,20.0,( int(W), int(H) ), True)
                        
                        
                        # ~ The cv2.VideoWriter requires five parameters:    

                    # check to see if we should write the frame to disk
                    if writer is not None:                          
                        try:
                            writer.write(frame)
                        except Exception as e:
                            print('Error in writing video output---> ', e)
                            
                    #<---------------------- end of  writing a video to disk ------------------------->

                    # show the output frame
                    # cv2.imshow("Frame", frame)
                    # key = cv2.waitKey(1) & 0xFF

                    # if the `q` key was pressed, break from the loop
                    # if key == ord("q"):
                    #   break

                    # increment the total number of frames processed thus far and
                    # then update the FPS counter
                    totalFrames += 1
                    fps.update()
                except Exception as e:
                    print('Error in main while loop--> ', e)

            # stop the timer and display FPS information
            fps.stop()

            # check to see if we need to release the video writer pointer
            # if writer is not None:
            #   writer.release()

            # if we are not using a video file, stop the camera video stream
            # if not self.input_stream:
            #   vs.stop()

            # otherwise, release the video file pointer
            # else:
            #   vs.release()

            # close any open windows
            # cv2.destroyAllWindows()
        except Exception as e:
            print(e, '333333333333333333333333333')


    


    My Django template code

    


    {% block main %}
<div class="row">
  {% if folders|length == 0 %}
    <div class="col-md-12 text-center">
      <h6 class="card-title">There are no files in this directory.</h6>
    </div>
  {% else %}
    {% for folder in folders %}
      <div class="col-md-3 text-center">
        <div class="card-block">
          <video class="video-js vjs-fluid vjs-default-skin" controls="controls" preload="auto" muted="muted" data-setup="{}">
            <source src="{% get_media_prefix %}camera-feed/video-saved/{{pk}}/{{parent}}/{{folder}}" type="video/mp4">
          </video>
          <h6 class="card-title">{{folder}}</h6>
        </div>
      </div>
    {% endfor %}
  {% endif %}
</div>
{% endblock %}


    Function in view.py to display the saved .mp4 video in the Django template. The video is saved in a local folder.


    def CameraVideos(request, pk, folder):
    dirpath = settings.MEDIA_ROOT + '/camera-feed/video-saved/' + str(pk) + '/' + folder
    folders = sorted(Path(dirpath).iterdir(), key=os.path.getmtime)
    data = []
    for file in folders:
        if not file.name.startswith('.'):
            data.append(file.name)
    return render(request, "home/camera_video.html", {'folders': data, 'pk': pk, 'parent': folder})
