
Other articles (44)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors can edit their own information on the authors page

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

  • Adding notes and captions to images

    7 February 2011, by

    To add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, editing, and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

On other sites (6070)

  • Tensorflow Video Parsing: ValueError: Convert None with Unsupported Type of class 'NoneType'

    17 February 2024, by John Siddarth

    I am trying to learn how to use classification on video data by running an example from TensorFlow.

    I installed the related tools with these commands:

# The way this tutorial uses the `TimeDistributed` layer requires TF>=2.10
pip install -U "tensorflow>=2.10.0"
pip install remotezip tqdm opencv-python
pip install -q git+https://github.com/tensorflow/docs

    Download the video file with:

curl -O https://upload.wikimedia.org/wikipedia/commons/8/86/End_of_a_jam.ogv
    The code:

    import tqdm
import random
import pathlib
import itertools
import collections

import os
import cv2
import numpy as np
import remotezip as rz

import tensorflow as tf

# Some modules to display an animation using imageio.
import imageio
from IPython import display
from urllib import request
from tensorflow_docs.vis import embed

def format_frames(frame, output_size):
  """
    Pad and resize an image from a video.

    Args:
      frame: Image that needs to be resized and padded.
      output_size: Pixel size of the output frame image.

    Return:
      Formatted frame with padding of specified output size.
  """
  frame = tf.image.convert_image_dtype(frame, tf.float32)
  frame = tf.image.resize_with_pad(frame, *output_size)
  return frame

def frames_from_video_file(video_path, n_frames, output_size = (224,224), frame_step = 15):
  """
    Creates frames from each video file present for each category.

    Args:
      video_path: File path to the video.
      n_frames: Number of frames to be created per video file.
      output_size: Pixel size of the output frame image.

    Return:
      A NumPy array of frames in the shape of (n_frames, height, width, channels).
  """
  # Read each video frame by frame
  result = []
  src = cv2.VideoCapture(str(video_path))  

  video_length = src.get(cv2.CAP_PROP_FRAME_COUNT)

  need_length = 1 + (n_frames - 1) * frame_step

  if need_length > video_length:
    start = 0
  else:
    max_start = video_length - need_length
    start = random.randint(0, max_start + 1)

  src.set(cv2.CAP_PROP_POS_FRAMES, start)
  # ret is a boolean indicating whether read was successful, frame is the image itself
  ret, frame = src.read()
  result.append(format_frames(frame, output_size))

  for _ in range(n_frames - 1):
    for _ in range(frame_step):
      ret, frame = src.read()
    if ret:
      frame = format_frames(frame, output_size)
      result.append(frame)
    else:
      result.append(np.zeros_like(result[0]))
  src.release()
  result = np.array(result)[..., [2, 1, 0]]

  return result

video_path = "End_of_a_jam.ogv"

sample_video = frames_from_video_file(video_path, n_frames = 10)
sample_video.shape

    Here is a summary of the troubleshooting I did:
    • Check File Path: I verified that the file path to the video was correct and accessible from the Python environment.

    • Verify File Permissions: I ensured that the video file had the necessary permissions to be read by the Python code.

    • Test with Absolute Path: I attempted to use an absolute file path to access the video file to eliminate any ambiguity in the file's location.

    • Check File Format and Encoding: I examined the video file to ensure it was in a supported format and encoded properly for reading by OpenCV.

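    The last check can be partly automated: Ogg container files such as the downloaded .ogv always begin with the 4-byte capture pattern OggS, so a quick format sanity check might look like this (a sketch; the filename comes from the question):

```python
def looks_like_ogg(header: bytes) -> bool:
    """Ogg container files (.ogv, .ogg, .oga) start with the
    4-byte capture pattern b'OggS'."""
    return header[:4] == b"OggS"

# Against the downloaded file:
# with open("End_of_a_jam.ogv", "rb") as f:
#     print(looks_like_ogg(f.read(4)))

print(looks_like_ogg(b"OggS\x00\x02rest"))  # → True
print(looks_like_ogg(b"<!DOCTYPE html>"))   # → False (an HTML error page, not a video)
```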
    What could be the cause?

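    One place a None can enter the pipeline is visible in the code above: format_frames(frame, output_size) is called right after src.read() without checking ret. When OpenCV cannot open or decode the file (for example, if the installed cv2 build lacks a decoder for the .ogv container), read() returns (False, None), and passing None into tf.image.convert_image_dtype raises exactly this kind of ValueError. A minimal guard to surface the real failure (a sketch; checked_frame is a hypothetical helper, not part of the tutorial):

```python
def checked_frame(ret, frame):
    """Return the frame only if cv2.VideoCapture.read() succeeded.

    read() returns (False, None) when the capture could not be opened
    or a frame could not be decoded; forwarding that None on to
    tf.image.convert_image_dtype is what produces the ValueError.
    """
    if not ret or frame is None:
        raise RuntimeError("OpenCV returned no frame - the file may be "
                           "missing, unreadable, or use an unsupported codec")
    return frame

# Simulating a failed read:
try:
    checked_frame(False, None)
except RuntimeError as err:
    print("caught:", err)
```

    Relatedly, if src.isOpened() is already False right after cv2.VideoCapture(...), the problem is opening the file at all (path, permissions, or container support) rather than a mid-stream decode failure.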
  • libavfilter/vf_dnn_detect: Use class confidence to filt boxes

    17 January 2024, by Wenbin Chen
    libavfilter/vf_dnn_detect: Use class confidence to filt boxes
    

    Use class confidence instead of box_score to filt boxes, which is more
    accurate. Class confidence is obtained by multiplying class probability
    distribution and box_score.

    Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
    Reviewed-by: Guo Yejun <yejun.guo@intel.com>

    • [DH] libavfilter/vf_dnn_detect.c
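    The filtering scheme the commit message describes can be sketched outside of C (an illustrative Python sketch, not FFmpeg's actual vf_dnn_detect code; all names are hypothetical):

```python
def class_confidences(class_probs, box_score):
    """Per the commit message: class confidence is the class
    probability distribution multiplied by box_score."""
    return [p * box_score for p in class_probs]

def keep_box(class_probs, box_score, conf_threshold):
    """Keep a detection box if its best class confidence clears the
    threshold, instead of thresholding on box_score alone."""
    return max(class_confidences(class_probs, box_score)) >= conf_threshold

print(keep_box([0.1, 0.8, 0.1], 0.9, 0.5))  # best class confidence 0.72 → True
print(keep_box([0.4, 0.3, 0.3], 0.9, 0.5))  # best class confidence 0.36 → False
```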
  • avformat/options: add missing disposition flag to AVStream class options

    25 October 2023, by James Almer
    avformat/options: add missing disposition flag to AVStream class options
    

    Signed-off-by: James Almer <jamrial@gmail.com>

    • [DH] libavformat/options.c
    • [DH] libavformat/version.h