
Media (91)

Other articles (107)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, all the software dependencies must be installed manually on the server.
    If you want to use this archive for an installation in farm mode, you will also need to make other modifications (...)

  • Updating from version 0.1 to 0.2

    24 June 2013, by

    Explanation of the notable changes involved in moving from MediaSPIP version 0.1 to version 0.3. What's new?
    Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

On other sites (9238)

  • How to Record Video of a Dynamic Div Containing Multiple Media Elements in React Konva?

    14 September 2024, by Humayoun Saeed

    I'm working on a React application where I need to record a video of a specific div with the class name "layout." This div contains multiple media elements (such as images and videos) that are dynamically rendered inside divisions. I've tried several approaches, including using MediaRecorder, canvas-based recording with html2canvas, RecordRTC, and even ffmpeg, but none seem to capture the entire div along with its dynamic content effectively.

    


    What would be the best approach to achieve this? How can I record a video of this dynamically rendered div, including all its media elements, ensuring a smooth capture of the transitions?

    


    What I've Tried:
MediaRecorder API: Didn't work effectively for capturing the entire div and its elements.
html2canvas: Captures snapshots but struggles with smooth transitions between media elements.
RecordRTC HTML Element Recording: Attempts to capture the canvas, but the output video size is 0 bytes.
CanvasRecorder, FFmpeg, and various other libraries also didn't provide the desired result.

    


import React, { useEffect, useState, useRef } from "react";

const Preview = ({ layout, onClose }) => {
  const [currentContent, setCurrentContent] = useState([]);
  const totalDuration = useRef(0);
  const videoRefs = useRef([]); // Store refs to each video element
  const [totalTime, setTotalTime] = useState(0);
  const [elapsedTime, setElapsedTime] = useState(0); // Track elapsed time in seconds

  const handleDownload = async () => {
    console.log("video download function in developing mode.");
  };

  // Compute each division's total duration and keep the longest one
  useEffect(() => {
    if (layout && layout.divisions) {
      const content = layout.divisions.map((division) => {
        let divisionDuration = 0;

        division.imageSrcs.forEach((src, index) => {
          const mediaDuration = division.durations[index]
            ? division.durations[index] * 1000 // Convert to milliseconds
            : 5000; // Fall back to 5 seconds if duration is missing
          divisionDuration += mediaDuration;
        });

        return { division, contentIndex: 0, divisionDuration };
      });

      // Find the maximum duration and take the first division that has it
      const maxDuration = Math.max(...content.map((c) => c.divisionDuration));
      const maxDurationDivisions = content.filter(
        (c) => c.divisionDuration === maxDuration
      );
      const selectedMaxDurationDivision = maxDurationDivisions[0];

      totalDuration.current = selectedMaxDurationDivision.divisionDuration; // Total duration in milliseconds
      setTotalTime(Math.floor(totalDuration.current / 1000000)); // Convert to seconds and set in state

      setCurrentContent(content);
    }
  }, [layout]);

  // Switch the media of each division after its configured duration
  useEffect(() => {
    if (currentContent.length > 0) {
      const timers = currentContent.map(({ division, contentIndex }, i) => {
        const duration = division.durations[contentIndex]
          ? division.durations[contentIndex] // Duration is already in ms
          : 5000; // Default to 5000ms if no duration is defined

        const mediaElement = videoRefs.current[i];
        if (mediaElement && mediaElement.pause) {
          mediaElement.pause();
        }

        // Move each division to its next media item after the duration
        const timeoutId = setTimeout(() => {
          updateContent(i, division, contentIndex, duration);

          if (contentIndex + 1 >= division.imageSrcs.length) {
            clearTimeout(timeoutId); // Clear timeout to stop looping
          }
        }, duration);

        return timeoutId;
      });

      // Clear all timeouts on unmount
      return () => timers.forEach((timer) => clearTimeout(timer));
    }
  }, [currentContent]);

  // Advance a division to its next media item, or pause on the last one
  const updateContent = (i, division, contentIndex, duration) => {
    const newContent = [...currentContent];

    if (contentIndex + 1 < division.imageSrcs.length) {
      newContent[i].contentIndex = contentIndex + 1;
    } else {
      // Last media item: keep it and pause a video at its end
      newContent[i].contentIndex = contentIndex;
      setCurrentContent(newContent);

      const mediaElement = videoRefs.current[i];
      if (mediaElement && mediaElement.tagName === "VIDEO") {
        mediaElement.pause();
        mediaElement.currentTime = mediaElement.duration;
      }
      return;
    }

    setCurrentContent(newContent);

    // Restart playback for the next media item
    const mediaElement = videoRefs.current[i];
    if (mediaElement) {
      mediaElement.pause();
      mediaElement.currentTime = 0;
      mediaElement
        .play()
        .catch((error) => console.error("Error playing video:", error));
    }
  };

  const renderDivision = (division, contentIndex, index) => {
    const mediaSrc = division.imageSrcs[contentIndex];

    if (!division || !division.imageSrcs || division.imageSrcs.length === 0) {
      return <p>No media available</p>;
    }
    if (!mediaSrc) {
      return <p>No media available</p>;
    }

    if (mediaSrc.endsWith(".mp4")) {
      return (
        <video
          ref={(el) => (videoRefs.current[index] = el)}
          src={mediaSrc}
          autoPlay
          controls={false}
          style={{
            width: "100%",
            height: "100%",
            objectFit: "cover",
            pointerEvents: "none",
          }}
          onLoadedData={() => {
            // Ensure the video is properly loaded before playing
            const mediaElement = videoRefs.current[index];
            if (mediaElement && mediaElement.readyState >= 3) {
              mediaElement.play().catch((error) => {
                console.error("Error attempting to play the video:", error);
              });
            }
          }}
        />
      );
    } else {
      // (the <img> element of the original post was stripped by the site's HTML filter)
      return <img src={mediaSrc} alt="" />;
    }
  };

  // Progress bar: advance elapsedTime by one second until totalTime is reached
  useEffect(() => {
    if (totalDuration.current > 0) {
      setElapsedTime(0);

      const interval = setInterval(() => {
        setElapsedTime((prevTime) => {
          if (prevTime < totalTime) {
            return prevTime + 1;
          } else {
            clearInterval(interval); // Stop when totalTime is reached
            return prevTime;
          }
        });
      }, 1000);

      return () => clearInterval(interval);
    }
  }, [totalTime]);

  return (
    /* The JSX of the original post was partially stripped by the site's HTML filter.
       It renders: a Close button, an <h2>Preview Layout: {layout.name}</h2> heading,
       the "layout" container mapping currentContent through renderDivision(), a
       progress bar driven by elapsedTime/totalTime with an "{elapsedTime} / {totalTime}s"
       label, and a "Download Video" button wired to handleDownload. */
    null
  );
};

export default Preview;

    I tried several methods to record the content of the div with the class "layout," which contains dynamic media elements such as images and videos. The approaches I attempted include:

    MediaRecorder API: I expected this API to capture the entire div and its contents, but it didn't handle the rendering of all the dynamic media elements properly.

    html2canvas: I used this to capture the layout as a canvas and then attempted to convert it into a video stream. However, it could not capture smooth transitions between media elements, leading to a choppy or incomplete video output.

    RecordRTC: I integrated RecordRTC to capture the canvas stream of the div. Despite setting up the recorder, the resulting video file either had a 0-byte size or only captured parts of the content inconsistently.

    FFmpeg and other libraries: I explored these tools hoping they would provide a seamless capture of the dynamic content, but they also failed to capture the full media elements, including the videos playing within the layout.

    In all cases, I expected to get a complete video recording of the div, including all media transitions, but the results were incomplete or not functional.

    Now, I'm seeking an approach or best practice to record the entire div with its dynamic content and media playback.
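
    One direction that may be worth trying (a minimal sketch, not a drop-in solution): instead of recording the DOM directly, draw the layout's currently mounted <video>/<img> elements onto an offscreen canvas on every animation frame, then record that canvas with canvas.captureStream() and MediaRecorder. The ".layout" selector, frame rate, MIME type and recording duration below are assumptions to adapt to the actual Preview markup.

// Hypothetical recorder for the ".layout" div: repeatedly draws its <video>/<img>
// children onto a canvas and records the canvas stream with MediaRecorder.
function recordLayout(durationMs) {
  const layoutEl = document.querySelector(".layout"); // assumed selector
  const canvas = document.createElement("canvas");
  canvas.width = layoutEl.clientWidth;
  canvas.height = layoutEl.clientHeight;
  const ctx = canvas.getContext("2d");

  let rafId;
  const draw = () => {
    const layoutRect = layoutEl.getBoundingClientRect();
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    // Paint every currently mounted media element at its on-screen position
    layoutEl.querySelectorAll("video, img").forEach((el) => {
      const r = el.getBoundingClientRect();
      ctx.drawImage(el, r.left - layoutRect.left, r.top - layoutRect.top, r.width, r.height);
    });
    rafId = requestAnimationFrame(draw);
  };
  draw();

  const stream = canvas.captureStream(30); // 30 fps canvas stream
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
  const chunks = [];
  recorder.ondataavailable = (e) => e.data.size && chunks.push(e.data);

  return new Promise((resolve) => {
    recorder.onstop = () => {
      cancelAnimationFrame(rafId);
      resolve(new Blob(chunks, { type: "video/webm" })); // caller can URL.createObjectURL() this
    };
    recorder.start();
    setTimeout(() => recorder.stop(), durationMs); // e.g. the layout's totalDuration
  });
}

    Two caveats with this approach: cross-origin media will taint the canvas and make captureStream() fail, and CSS transitions/filters are not reproduced by drawImage(), which may be the same limitation that made the html2canvas attempt look choppy.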


  • Frames taken with an ELP camera have an unknown pixel format at FHD?

    11 November 2024, by Marcel Kopera

    I'm trying to take one frame every x seconds from my USB camera. The name of the camera is: ELP-USBFHD06H-SFV(5-50).
    The code is not 100% done yet, but I'm using it this way right now ↓ (the shot fn is called from main.py in a loop)

import cv2
import subprocess

from time import sleep
from collections import namedtuple

from errors import *

class Camera:
    def __init__(self, cam_index, res_width, res_height, pic_format, day_time_exposure_ms, night_time_exposure_ms):
        Resolution = namedtuple("resolution", ["width", "height"])
        self.manual_mode(True)

        self.cam_index = cam_index
        self.camera_resolution = Resolution(res_width, res_height)
        self.picture_format = pic_format
        self.day_time_exposure_ms = day_time_exposure_ms
        self.night_time_exposure_ms = night_time_exposure_ms

        self.started: bool = False
        self.night_mode = False

        self.cap = cv2.VideoCapture(self.cam_index, cv2.CAP_V4L2)
        self.cap.set(cv2.CAP_PROP_FRAME_WIDTH, self.camera_resolution.width)
        self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, self.camera_resolution.height)
        self.cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*self.picture_format))


    def start(self):
        sleep(1)
        if not self.cap.isOpened():
            return CameraCupError()

        self.set_exposure_time(self.day_time_exposure_ms)
        self.set_brightness(0)
        sleep(0.1)

        self.started = True


    def shot(self, picture_name, is_night):
        if not self.started:
            return InitializationError()

        self.configure_mode(is_night)

        # Clear buffer
        for _ in range(5):
            ret, _ = self.cap.read()

        ret, frame = self.cap.read()

        sleep(0.1)

        if ret:
            print(picture_name)
            cv2.imwrite(picture_name, frame)
            return True

        else:
            print("No photo")
            return False


    def release(self):
        self.set_exposure_time(156)
        self.set_brightness(0)
        self.manual_mode(False)
        self.cap.release()


    def manual_mode(self, switch: bool):
        if switch:
            subprocess.run(["v4l2-ctl", "--set-ctrl=auto_exposure=1"])
        else:
            subprocess.run(["v4l2-ctl", "--set-ctrl=auto_exposure=3"])
        sleep(1)


    def configure_mode(self, is_night):
        if is_night == self.night_mode:
            return

        if is_night:
            self.night_mode = is_night
            self.set_exposure_time(self.night_time_exposure_ms)
            self.set_brightness(64)
        else:
            self.night_mode = is_night
            self.set_exposure_time(self.day_time_exposure_ms)
            self.set_brightness(0)
        sleep(0.1)


    def set_exposure_time(self, ms: int):
        ms = int(ms)
        default_val = 156

        if ms < 1 or ms > 5000:
            ms = default_val

        self.cap.set(cv2.CAP_PROP_EXPOSURE, ms)


    def set_brightness(self, value: int):
        value = int(value)
        default_val = 0

        if value < -64 or value > 64:
            value = default_val

        self.cap.set(cv2.CAP_PROP_BRIGHTNESS, value)

    Here are the settings for the camera (YAML file):

camera:
  camera_index: 0
  res_width: 1920
  res_height: 1080
  picture_format: "MJPG"
  day_time_exposure_ms: 5
  night_time_exposure_ms: 5000
  photos_format: "jpg"
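
    For context, a minimal sketch of how main.py might load this YAML and drive the class above in a loop (the module name, config file name, loop interval and is_night flag are assumptions; PyYAML is assumed for parsing):

# Hypothetical main.py: take one frame every x seconds using the Camera class above.
import time
import yaml  # PyYAML, assumed

from camera import Camera  # assumed module name for the class above

with open("config.yaml") as f:  # assumed config file name
    cfg = yaml.safe_load(f)["camera"]

cam = Camera(cfg["camera_index"], cfg["res_width"], cfg["res_height"],
             cfg["picture_format"], cfg["day_time_exposure_ms"],
             cfg["night_time_exposure_ms"])
cam.start()

try:
    while True:
        # Produces names like "08:59:20.jpg", as seen in the ffmpeg error further down
        name = time.strftime("%H:%M:%S") + "." + cfg["photos_format"]
        cam.shot(name, is_night=False)  # is_night would normally come from a clock or light sensor
        time.sleep(10)  # x seconds between frames (assumed)
finally:
    cam.release()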

    I do some configuration, like setting manual mode for the camera, changing exposure/brightness, and saving the frame.
    Also, the camera is probably keeping frames in a buffer (it is not saving the latest frame in real time: it's more laggy), so I have to clear the buffer every time, like this:

        # Clear buffer from old frames
        for _ in range(5):
            ret, _ = self.cap.read()

        # Get a new frame
        ret, frame = self.cap.read()

    Which I really don't like, but I couldn't find a better way (tl;dr: setting the buffer to 1 frame doesn't work on my camera).
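
    For reference, two things that can be probed at this point (a minimal sketch assuming the V4L2 backend; as noted above, not every camera/driver honors the buffer-size property): ask OpenCV to keep only one frame in its queue, and read back which format and resolution the capture actually negotiated.

import cv2

cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))

# Some V4L2 backends honor this and make the buffer-flushing loop unnecessary;
# the post above reports it does not work on this particular camera.
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)

# Read back what was actually negotiated: if the FOURCC is not MJPG or the size is
# not 1920x1080, the driver silently fell back to another mode.
fourcc = int(cap.get(cv2.CAP_PROP_FOURCC))
fourcc_str = "".join(chr((fourcc >> (8 * i)) & 0xFF) for i in range(4))
print(fourcc_str, cap.get(cv2.CAP_PROP_FRAME_WIDTH), cap.get(cv2.CAP_PROP_FRAME_HEIGHT))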

    Frames saved this way look good at 1920x1080 resolution. BUT when I try to run an ffmpeg command to make a timelapse from the saved jpg files, like this

    ffmpeg -framerate 20 -pattern_type glob -i "*.jpg" -c:v libx264 output.mp4

    I got an error like this one:

[image2 @ 0x555609c45240] Could not open file : 08:59:20.jpg
[image2 @ 0x555609c45240] Could not find codec parameters for stream 0 (Video: mjpeg, none(bt470bg/unknown/unknown)): unspecified size
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, image2, from '*.jpg':
  Duration: 00:00:00.05, start: 0.000000, bitrate: N/A
  Stream #0:0: Video: mjpeg, none(bt470bg/unknown/unknown), 20 fps, 20 tbr, 20 tbn
Output #0, mp4, to 'output.mp4':
Output file #0 does not contain any stream

    Also, when I try to copy the files from Linux to Windows, I get a weird copy-failure error and the option to skip the picture. But even when I press the skip button, the picture is copied and can be opened. I'm not sure what is wrong with the format, but the camera does support MJPG at 1920x1080.
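
    Since ffmpeg reports an unknown pixel format with "unspecified size" and Windows complains while copying, it may be worth checking whether the saved files are complete JPEGs at all; a small sketch (the glob pattern is an assumption):

import glob
import cv2

for path in sorted(glob.glob("*.jpg")):
    with open(path, "rb") as f:
        data = f.read()
    # A valid JPEG starts with FF D8 and ends with FF D9
    markers_ok = data[:2] == b"\xff\xd8" and data[-2:] == b"\xff\xd9"
    decodable = cv2.imread(path) is not None  # None if OpenCV cannot decode the file
    print(path, len(data),
          "markers ok" if markers_ok else "truncated/corrupt",
          "decodable" if decodable else "not decodable")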

>>> v4l2-ctl --all

Driver Info:
        Driver name      : uvcvideo
        Card type        : H264 USB Camera: USB Camera
        Bus info         : usb-xhci-hcd.1-1
        Driver version   : 6.6.51
        Capabilities     : 0x84a00001
                Video Capture
                Metadata Capture
                Streaming
                Extended Pix Format
                Device Capabilities
        Device Caps      : 0x04200001
                Video Capture
                Streaming
                Extended Pix Format
Media Driver Info:
        Driver name      : uvcvideo
        Model            : H264 USB Camera: USB Camera
        Serial           : 2020032801
        Bus info         : usb-xhci-hcd.1-1
        Media version    : 6.6.51
        Hardware revision: 0x00000100 (256)
        Driver version   : 6.6.51
Interface Info:
        ID               : 0x03000002
        Type             : V4L Video
Entity Info:
        ID               : 0x00000001 (1)
        Name             : H264 USB Camera: USB Camera
        Function         : V4L2 I/O
        Flags            : default
        Pad 0x0100000d   : 0: Sink
          Link 0x0200001a: from remote pad 0x1000010 of entity 'Extension 4' (Video Pixel Formatter): Data, Enabled, Immutable
Priority: 2
Video input : 0 (Camera 1: ok)
Format Video Capture:
        Width/Height      : 1920/1080
        Pixel Format      : 'MJPG' (Motion-JPEG)
        Field             : None
        Bytes per Line    : 0
        Size Image        : 4147789
        Colorspace        : sRGB
        Transfer Function : Default (maps to sRGB)
        YCbCr/HSV Encoding: Default (maps to ITU-R 601)
        Quantization      : Default (maps to Full Range)
        Flags             :
Crop Capability Video Capture:
        Bounds      : Left 0, Top 0, Width 1920, Height 1080
        Default     : Left 0, Top 0, Width 1920, Height 1080
        Pixel Aspect: 1/1
Selection Video Capture: crop_default, Left 0, Top 0, Width 1920, Height 1080, Flags:
Selection Video Capture: crop_bounds, Left 0, Top 0, Width 1920, Height 1080, Flags:
Streaming Parameters Video Capture:
        Capabilities     : timeperframe
        Frames per second: 15.000 (15/1)
        Read buffers     : 0

User Controls

                     brightness 0x00980900 (int)    : min=-64 max=64 step=1 default=0 value=64
                       contrast 0x00980901 (int)    : min=0 max=64 step=1 default=32 value=32
                     saturation 0x00980902 (int)    : min=0 max=128 step=1 default=56 value=56
                            hue 0x00980903 (int)    : min=-40 max=40 step=1 default=0 value=0
        white_balance_automatic 0x0098090c (bool)   : default=1 value=1
                          gamma 0x00980910 (int)    : min=72 max=500 step=1 default=100 value=100
                           gain 0x00980913 (int)    : min=0 max=100 step=1 default=0 value=0
           power_line_frequency 0x00980918 (menu)   : min=0 max=2 default=1 value=1 (50 Hz)
                                0: Disabled
                                1: 50 Hz
                                2: 60 Hz
      white_balance_temperature 0x0098091a (int)    : min=2800 max=6500 step=1 default=4600 value=4600 flags=inactive
                      sharpness 0x0098091b (int)    : min=0 max=6 step=1 default=3 value=3
         backlight_compensation 0x0098091c (int)    : min=0 max=2 step=1 default=1 value=1

Camera Controls

                  auto_exposure 0x009a0901 (menu)   : min=0 max=3 default=3 value=1 (Manual Mode)
                                1: Manual Mode
                                3: Aperture Priority Mode
         exposure_time_absolute 0x009a0902 (int)    : min=1 max=5000 step=1 default=156 value=5000
     exposure_dynamic_framerate 0x009a0903 (bool)   : default=0 value=0

    I also tried to save the picture using ffmpeg, in case something is not right with OpenCV, like this:

    ffmpeg -f v4l2 -framerate 30 -video_size 1920x1080 -i /dev/video0 -c:v libx264 -preset fast -crf 23 -t 00:01:00 output.mp4

    It saves the picture, but it also changes its format:

[video4linux2,v4l2 @ 0x555659ed92b0] The V4L2 driver changed the video from 1920x1080 to 800x600
[video4linux2,v4l2 @ 0x555659ed92b0] The driver changed the time per frame from 1/30 to 1/15
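
    A possible explanation (an assumption, not verified on this camera): when no input format is given, ffmpeg's v4l2 input tends to pick the camera's raw YUYV mode, which on many of these modules is only offered at lower resolutions, hence the fallback to 800x600 at 15 fps. Explicitly requesting MJPEG may keep the FHD mode:

    ffmpeg -f v4l2 -input_format mjpeg -framerate 30 -video_size 1920x1080 -i /dev/video0 -c:v libx264 -preset fast -crf 23 -t 00:01:00 output.mp4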

    But the format looks right when I set it back to FHD using v4l2:

>>> v4l2-ctl --device=/dev/video0 --set-fmt-video=width=1920,height=1080,pixelformat=MJPG
>>> v4l2-ctl --get-fmt-video

Format Video Capture:
        Width/Height      : 1920/1080
        Pixel Format      : 'MJPG' (Motion-JPEG)
        Field             : None
        Bytes per Line    : 0
        Size Image        : 4147789
        Colorspace        : sRGB
        Transfer Function : Default (maps to sRGB)
        YCbCr/HSV Encoding: Default (maps to ITU-R 601)
        Quantization      : Default (maps to Full Range)
        Flags             :

    I'm not sure what could be wrong with the format/camera, and I don't think I have enough information to figure it out.

    I tried to use ffmpeg instead of OpenCV and also changed a few settings in OpenCV's capture config.

  • FFMPEG split video with an equal number of frames in each split?

    17 January 2017, by Gurinderbeer Singh

    I am using FFMPEG to split a video file by using the following command:

    ffmpeg -i input.mp4 -c copy -segment_times 600,600 -f segment out%d.mp4

    This command divides the video based on time. But even though the output splits have the same duration, the number of frames differs between the splits.

    Is there a way of splitting a video file without re-encoding so that the number of frames is equal in the split files?

    Even if it uses some tool other than FFMPEG.

    For example: input.mp4, with a duration of 200 seconds, has 5000 frames.

    Can we split this into:

    input1.mp4 having 2500 frames
    and input2.mp4 having 2500 frames.

    It doesn't matter if the durations of the output splits are different.
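
    One option that may get close to this without re-encoding (a sketch, not verified): the segment muxer also accepts explicit frame numbers through -segment_frames, so with 5000 frames in total the split point can be stated directly. Note that with -c copy the cut can still only land on a keyframe, so the counts come out as exactly 2500/2500 only if frame 2500 starts a GOP; otherwise the stream has to be re-encoded (or keyframes forced at the split point).

    ffmpeg -i input.mp4 -c copy -f segment -segment_frames 2500 out%d.mp4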

    Please help!