Advanced search


Other articles (69)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two images below to compare.
    To do this, simply enable the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites that publish documents of all types.
    It creates "media", namely: a "media" is an article, in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a given "media" article;

  • Custom menus

    14 November 2010, by

    MediaSPIP uses the Menus plugin to manage several configurable menus for navigation.
    This lets channel administrators configure these menus in detail.
    Menus created when the site is initialised
    By default, three menus are created automatically when the site is initialised: The main menu; Identifier: barrenav; This menu is generally inserted at the top of the page, after the header block, and its identifier makes it compatible with skeletons based on Zpip; (...)

On other sites (8617)

  • ffmpeg: How does one designate what parts of an overlay video stream let the underlay video stream show through?

    21 February 2023, by Walt Howard

    Stream 0 is a rotating planet created using povray. (https://www.povray.org/)

    Stream 1 is just a static jpeg of stars.

    I'm using overlay like this:

    nice ffmpeg -i protoplanet.mp4 -i stars.jpg \
         -filter_complex "[0:v]scale=1024:768[scaled];[1:v][scaled]overlay=0:0" \
         -movflags faststart -pix_fmt yuv420p -r 10 planet.mp4

    I had it working when I used the output from povray directly. I didn't have to know why that worked, because it just did. However, after adding some post-processing to the planet video, the entire video has no alpha channel (educated guess), so the background stream (Stream 1) cannot show through.
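
    One quick way to test that guess (a sketch; planet_glow.mp4 stands in for whatever the post-processed file is called) is to ask ffprobe for the pixel format. Formats without an alpha plane, such as yuv420p, carry no transparency at all:

     ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt \
          -of default=noprint_wrappers=1 planet_glow.mp4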

    The post-processing I did was this (which works great): https://www.reddit.com/r/ffmpeg/comments/im2mkp/creating_a_retro_glow_effect_with_ffmpeg/

    But that made it impossible for the background to show through the overlay, possibly because it destroyed the alpha channel and turned many of the blacks dark grey.

    I can merge the pure POVRAY output and the background and then add the glow effect, but that applies the effect to the background stars and captions as well, which ruins it to some degree. I want to glow-ify the planet first, then stick it on a pure starfield background.

    Thinking this over, I may have to recreate the alpha channel after adding the glow effect, mapping near-black and dark-grey pixels to transparency.
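
    Something along these lines might do that mapping (a sketch, untested; glowing_planet.mp4 is a stand-in name for the post-processed planet video). The colorkey filter makes pixels within a given similarity of a colour transparent, so keying on black just before the overlay would let the starfield show through:

     nice ffmpeg -i glowing_planet.mp4 -i stars.jpg \
          -filter_complex "[0:v]scale=1024:768,colorkey=black:0.2:0.1[keyed];[1:v][keyed]overlay=0:0" \
          -movflags faststart -pix_fmt yuv420p -r 10 planet_on_stars.mp4

    The similarity and blend values (0.2 and 0.1 here) would need tuning so the glow fades out instead of being cut off hard.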

    Hmmm. It might be a codec issue, as I didn't specify any -c:v in any of my commands...

  • React Native Expo File System: open failed: ENOENT (No such file or directory)

    9 February 2023, by coloraday

    I'm getting this error in a bare React Native project:

    Possible Unhandled Promise Rejection (id: 123):
Error: /data/user/0/com.filsufius.VisionishAItest/files/image-new-♥d.jpg: open failed: ENOENT (No such file or directory)

    The same code was saving to the File System with no problem yesterday, but today, as you can see, I am getting an ENOENT error, plus these funny heart shapes (♥d) in the path. Any pointers as to what might be causing this, please? I use npx expo run:android to build the app locally and expo start --dev-client to run it on a physical Android device connected through USB.

    import { useState, useEffect } from "react"; // missing from the original paste
    import { Image, View, Text, StyleSheet } from "react-native";
    import * as FileSystem from "expo-file-system";
    import RNFFmpeg from "react-native-ffmpeg";
    import * as tf from "@tensorflow/tfjs";
    import * as cocossd from "@tensorflow-models/coco-ssd";
    import { decodeJpeg, bundleResourceIO } from "@tensorflow/tfjs-react-native";

    const Record = () => {
      const [frames, setFrames] = useState([]);
      const [currentFrame, setCurrentFrame] = useState(0);
      const [model, setModel] = useState(null);
      const [detections, setDetections] = useState([]);

      // Extract frames from the remote stream into the document directory.
      useEffect(() => {
        const fileName = "image-new-%03d.jpg";
        const outputPath = FileSystem.documentDirectory + fileName;
        RNFFmpeg.execute(
          "-y -i https://res.cloudinary.com/dannykeane/video/upload/sp_full_hd/q_80:qmax_90,ac_none/v1/dk-memoji-dark.m3u8 -vf fps=25 -f mjpeg " +
            outputPath
        )
          .then((result) => {
            console.log("Extraction succeeded:", result);
            FileSystem.readDirectoryAsync(FileSystem.documentDirectory).then(
              (files) => {
                setFrames(
                  files
                    .filter((file) => file.endsWith(".jpg"))
                    .sort((a, b) => {
                      const aNum = parseInt(a.split("-")[2].split(".")[0]);
                      const bNum = parseInt(b.split("-")[2].split(".")[0]);
                      return aNum - bNum;
                    })
                );
              }
            );
          })
          .catch((error) => {
            console.error("Extraction failed:", error);
          });
      }, []);

      // Load the COCO-SSD model once TensorFlow is ready.
      useEffect(() => {
        tf.ready().then(() => cocossd.load().then((model) => setModel(model)));
      }, []);

      // Cycle through the extracted frames and run detection on each one.
      useEffect(() => {
        if (frames.length && model) {
          const intervalId = setInterval(async () => {
            setCurrentFrame((currentFrame) =>
              currentFrame === frames.length - 1 ? 0 : currentFrame + 1
            );
            const path = FileSystem.documentDirectory + frames[currentFrame];
            const imageAssetPath = await FileSystem.readAsStringAsync(path, {
              encoding: FileSystem.EncodingType.Base64,
            });
            const imgBuffer = tf.util.encodeString(imageAssetPath, "base64").buffer;
            const imageData = new Uint8Array(imgBuffer);
            const imageTensor = decodeJpeg(imageData, 3);
            console.log("after decodeJpeg.");
            const detections = await model.detect(imageTensor);
            console.log(detections);
            setDetections(detections);
          }, 100);
          return () => clearInterval(intervalId);
        }
      }, [frames, model]);

      return (
        <View style={styles.container}>
          <View style={styles.predictions}>
            {detections.map((p, i) => (
              <Text key={i} style={styles.text}>
                {p.class}: {(p.score * 100).toFixed(2)}%
              </Text>
            ))}
          </View>
        </View>
      );
    };

    const styles = StyleSheet.create({
      container: {
        flex: 1,
        alignItems: "center",
        justifyContent: "center",
      },
      image: {
        width: 300,
        height: 300,
        resizeMode: "contain",
      },
      predictions: {
        width: 300,
        height: 100,
        marginTop: 20,
      },
      text: {
        fontSize: 14,
        textAlign: "center",
      },
    });

    export default Record;
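
    For what it's worth, ♥ is how the byte 0x03 renders in the old IBM code page 437, and "%03" is also the URL percent-escape for byte 0x03, so image-new-♥d.jpg looks exactly like image-new-%03d.jpg after something percent-decoded the path. That is only a guess about the cause. A small sketch to probe what was actually written (image-new-001.jpg is simply the first name ffmpeg's pattern would produce; FileSystem.getInfoAsync is the standard expo-file-system existence check):

    import * as FileSystem from "expo-file-system";

    // Probe for the first frame the "image-new-%03d.jpg" pattern should create.
    const probe = FileSystem.documentDirectory + "image-new-001.jpg";
    FileSystem.getInfoAsync(probe).then((info) =>
      console.log("first frame exists:", info.exists, "at", info.uri)
    );

    If the pattern really is being mangled during command parsing, passing the arguments as an array via react-native-ffmpeg's executeWithArguments, instead of one concatenated string, might sidestep it, though that is also an assumption about where the mangling happens.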


  • How to interpret ffmpeg recording options available for a webcam (directshow)?

    5 January 2023, by Jones659

    I am trying to create a GUI for personal use that allows someone to customise the recording and conversion options of ffmpeg without directly using the command line. At the moment, I am learning about the different parameters and flags in ffmpeg.


    Apologies in advance if I end up asking some stupid questions; I am on a learning journey at the moment, and unfortunately not all of this info is available online in an easily understandable way.


    I have a USB webcam which reported having the following options available to it:


    [dshow @ 00000000003f9340]   pixel_format=yuyv422  min s=640x480 fps=5 max s=640x480 fps=30
    [dshow @ 00000000003f9340]   pixel_format=yuyv422  min s=640x480 fps=5 max s=640x480 fps=30 (tv, bt470bg/bt709/unknown, topleft) chroma_location=topleft
    [dshow @ 00000000003f9340]   pixel_format=yuyv422  min s=352x288 fps=5 max s=352x288 fps=30
    [dshow @ 00000000003f9340]   pixel_format=yuyv422  min s=352x288 fps=5 max s=352x288 fps=30 (tv, bt470bg/bt709/unknown, topleft)
    [dshow @ 00000000003f9340]   pixel_format=yuyv422  min s=320x240 fps=5 max s=320x240 fps=30
    [dshow @ 00000000003f9340]   pixel_format=yuyv422  min s=320x240 fps=5 max s=320x240 fps=30 (tv, bt470bg/bt709/unknown, topleft)
    [dshow @ 00000000003f9340]   pixel_format=yuyv422  min s=176x144 fps=5 max s=176x144 fps=30
    [dshow @ 00000000003f9340]   pixel_format=yuyv422  min s=176x144 fps=5 max s=176x144 fps=30 (tv, bt470bg/bt709/unknown, topleft)
    [dshow @ 00000000003f9340]   pixel_format=yuyv422  min s=160x120 fps=5 max s=160x120 fps=30
    [dshow @ 00000000003f9340]   pixel_format=yuyv422  min s=160x120 fps=5 max s=160x120 fps=30 (tv, bt470bg/bt709/unknown, topleft)
    [dshow @ 00000000003f9340]   pixel_format=yuyv422  min s=1280x1024 fps=5 max s=1280x1024 fps=9
    [dshow @ 00000000003f9340]   pixel_format=yuyv422  min s=1280x1024 fps=5 max s=1280x1024 fps=9 (tv, bt470bg/bt709/unknown, topleft)


    I just want to get to the bottom of how I should interpret this; apologies that I will ask multiple questions:


    1. The fact that both resolution and fps have a min and max value (for every option) seems to imply that these two parameters are supposedly uncontrollably variable, right? In practice, the fps has been variable depending on brightness, whereas the resolution has not been. Is it safe to assume that video imaging devices (especially webcams) do not have variable resolution?

    2. Secondly, why is every option listed twice, except that half of them specify extra info, such as color_range, color_space, and chroma_location? Is this just a quirk? Surely those extra parameter options should not be discarded?

    3. It's hard to know how to make sense of this, but for example: the fact that only "tv" is ever shown, does that imply that the webcam can only ever do limited color range, and there is no point trying to get a full 0-255 out of it? I read somewhere that "pc" implies a full range of 0-255, whereas "tv" implies a range of 16-235.

    4. With regards to color space, is it acceptable to record the webcam as raw (un-encoded), and then convert to a different color space later down the line? Which approach to dealing with the color space yields the least amount of lost color? My only previous experience with color spaces is in the realm of images, where, for example, it makes no sense to convert sRGB to ROMM16 RGB: you're going to a color space with wider coverage, and extra colors won't be created out of thin air; you'd want to go once from raw to a color space and avoid converting between color spaces afterwards. Also, what does "unknown" mean in the color space options?

    Here's the culmination of some research/testing I've done. Is there anything correct, or seriously wrong, in the conclusions and assumptions I've made below?


    My understanding of pixel_format is as follows: when you're recording (even to raw), you specify the pixel format using something like "-pixel_format yuyv422"; this is a "packed", not "planar", format, which is what the webcam produces. When you convert from raw to something like mkv using libx264, you can't specify a "packed" pixel format such as "yuyv422", but must instead use an appropriate planar counterpart, such as "yuv422p", which would be specified using "-pix_fmt yuv422p".
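
    As a sketch of that two-step workflow (the device name "USB Video Device" is a placeholder; substitute whatever ffmpeg -list_devices true -f dshow -i dummy reports for your webcam):

     ffmpeg -f dshow -video_size 640x480 -framerate 30 -pixel_format yuyv422 \
          -i video="USB Video Device" -c:v rawvideo raw_capture.avi

     ffmpeg -i raw_capture.avi -c:v libx264 -pix_fmt yuv422p encoded.mkv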


    I did a raw recording of the webcam (in which I recorded a bright light, in the dark); I didn't set any of the options in the brackets above. I then converted this video using libx264 with the flags "-dst_range 1 -color_range 2", which I saw elsewhere on the internet.
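
    A more explicit way to state the ranges during that conversion might be the scale filter's in_range/out_range options, with -color_range tagging the output (a sketch, under the assumption that the capture really is full range):

     ffmpeg -i raw_capture.avi -vf "scale=in_range=full:out_range=full" \
          -c:v libx264 -pix_fmt yuv422p -color_range pc full_range.mkv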


    Taking a screenshot of this video using VLC and putting it through imagemagick identify -verbose shows that the color range of the screenshot is 0,255. As for the video itself, MediaInfo reports "color range: Full", and VLC's codec info says "Decoded format: Planar 4:2:2 YUV full scale". Is this info worth anything, or is it just metadata that the video got tagged with?
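
    For comparison, ffprobe can report the same tags straight from the stream (a sketch against the hypothetical full_range.mkv above):

     ffprobe -v error -select_streams v:0 \
          -show_entries stream=pix_fmt,color_range,color_space,color_transfer,color_primaries \
          full_range.mkv

    Like MediaInfo and VLC, this reads stream metadata, so it shows what the video is tagged as, not necessarily what the pixel values actually span.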


    At first I was happy about imagemagick's color range reporting, but I am now thinking the 0,255 range could be a result of "overshoot" values produced by the camera, which aren't actually supposed to be mapped linearly.
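
    One way to check what the pixels actually span, rather than what they are tagged as, is the signalstats filter, which can print per-frame minimum and maximum luma (a sketch; values hugging 0 and 255 suggest genuinely full-range content, while 16 and 235 suggest limited):

     ffmpeg -i full_range.mkv \
          -vf "signalstats,metadata=print:key=lavfi.signalstats.YMIN,metadata=print:key=lavfi.signalstats.YMAX" \
          -f null -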


    I appreciate that this probably feels like some school kid offloading their homework assignment to avoid doing work, but I hope it can be seen that I've looked into these things prior to putting this post together.


    Thanks in advance to anyone who takes the time to answer anything.
