
Other articles (70)

  • Improving the base version

    13 September 2013

    A nicer multiple select
    The Chosen plugin improves the usability of multiple-select fields. Compare the two images below.
    To use it, simply enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed for managing sites that publish documents of all kinds.
    It creates "médias", namely: a "média" is a SPIP article created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a so-called "média" article;

  • Managing the farm

    2 March 2010, by

    The farm as a whole is managed by "super admins".
    Some settings can be adjusted to accommodate the needs of the different channels.
    Initially it uses the "Gestion de mutualisation" plugin

On other sites (8595)

  • Revision c82de3bede: Extending ext_tx expt to include dst variants

    15 November 2014, by Deb Mukherjee

    Changed Paths:
     Modify /vp9/common/vp9_blockd.h
     Modify /vp9/common/vp9_entropymode.c
     Modify /vp9/common/vp9_entropymode.h
     Modify /vp9/common/vp9_enums.h
     Modify /vp9/common/vp9_idct.c
     Modify /vp9/decoder/vp9_decodeframe.c
     Modify /vp9/decoder/vp9_decodemv.c
     Modify /vp9/encoder/vp9_bitstream.c
     Modify /vp9/encoder/vp9_encodeframe.c
     Modify /vp9/encoder/vp9_encodemb.c
     Modify /vp9/encoder/vp9_encoder.h
     Modify /vp9/encoder/vp9_rd.c
     Modify /vp9/encoder/vp9_rdopt.c

    Extending ext_tx expt to include dst variants

    Extends the ext-tx experiment to include regular and flipped
    DST variants. A total of 9 transforms are thus possible for
    each inter block with transform size <= 16x16.

    In this patch currently only the four ADST_ADST variants
    (flipped or non-flipped in both dimensions) are enabled
    for inter blocks.

    The gain with the ext-tx experiment grows to +1.12 on derflr.
    Further experiments are underway.

    Change-Id: Ia2ed19a334face6135b064748f727fdc9db278ec
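    The "flipped" variants the commit mentions can be pictured as mirroring the residual block along one or both axes before applying a separable forward transform, which effectively mirrors the transform's basis functions. A hypothetical numpy sketch of the four flip combinations (an illustration only, not the libvpx kernels):

    ```python
    import numpy as np

    def transform_variants(block, fwd):
        # Build the four flip variants described in the commit: the block is
        # mirrored horizontally, vertically, or both before the forward
        # transform `fwd` is applied. `fwd` stands in for any separable
        # transform (ADST/DST in libvpx); this is a sketch, not the
        # production code.
        return {
            'none':    fwd(block),
            'flip_h':  fwd(block[:, ::-1]),   # mirror columns
            'flip_v':  fwd(block[::-1, :]),   # mirror rows
            'flip_hv': fwd(block[::-1, ::-1]) # mirror both
        }
    ```

    Combined with the three base transforms per dimension this is how the total of 9 transforms per inter block arises.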

  • lavu: fix memory leaks by using a mutex instead of atomics

    14 November 2014, by wm4

    The buffer pool has to atomically add and remove entries from the linked
    list of available buffers. This was done by removing the entire list
    with a CAS operation, working on it, and then setting it back again
    (using a retry-loop in case another thread was doing the same thing).

    This could effectively cause memory leaks: while a thread was working on
    the buffer list, other threads would allocate new buffers, increasing
    the pool’s total size. There was no real leak, but since these extra
    buffers were not needed, but not free’d either (except when the buffer
    pool was destroyed), this had the same effects as a real leak. For some
    reason, growth was exponential, and could easily kill the process due
    to OOM in real-world uses.

    Fix this by using a mutex to protect the list operations. The fancy
    way atomics remove the whole list to work on it is not needed anymore,
    which also avoids the situation which was causing the leak.

    Signed-off-by: Anton Khirnov <anton@khirnov.net>

    • [DBH] libavutil/buffer.c
    • [DBH] libavutil/buffer_internal.h
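    The pattern the commit describes (replacing a detach-the-whole-list CAS scheme with a mutex held only for each push/pop) can be sketched in Python. This is a hypothetical illustration; the real code is C, in libavutil/buffer.c:

    ```python
    import threading

    class BufferPool:
        # Toy pool: a plain mutex guards each push/pop on the free list,
        # so the list is never detached wholesale and concurrent get()
        # calls cannot trigger the surplus allocations described above.
        def __init__(self, bufsize=1024):
            self._lock = threading.Lock()
            self._free = []            # stack of returned buffers
            self._bufsize = bufsize
            self.total_allocated = 0   # grows only when the free list is empty

        def get(self):
            with self._lock:           # short critical section per operation
                if self._free:
                    return self._free.pop()
                self.total_allocated += 1
                size = self._bufsize
            return bytearray(size)     # allocate outside the lock

        def put(self, buf):
            with self._lock:
                self._free.append(buf)
    ```

    Because no thread ever sees an "empty" list while another thread is working on a detached copy, the pool's total size stays bounded by actual concurrent demand.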
  • Parallelize Youtube video frame download using yt-dlp and cv2

    4 March 2023, by zulle99

    My task is to download multiple sequences of successive low-resolution frames of Youtube videos.

    I summarize the main parts of the process:

    • Each bag of shots has a duration of half a second (depending on the current fps).
    • In order to grab useful frames I've decided to remove the initial and final 10% of each video, since it is common to have an intro and an outro. Moreover
    • I've made an array of pairs of initial and final frames to distribute the load over multiple processes using ProcessPoolExecutor(max_workers=multiprocessing.cpu_count()).
    • In case of failure/exception I completely remove the corresponding directory.

    The point is that it does not scale up: while running I noticed that all the CPUs always had a load of roughly 20% or less. In addition, since I have to run multiple CNNs on these shots, to prevent overfitting it is suggested to have a big dataset rather than a small bunch of shots.

    Here is the code:

    import yt_dlp
    import os
    from tqdm import tqdm
    import cv2
    import shutil
    import time
    import random
    from concurrent.futures import ProcessPoolExecutor
    import multiprocessing
    import pandas as pd
    import numpy as np
    from pathlib import Path
    import zipfile


    # PARAMETERS
    percentage_train_test = 50
    percentage_bag_shots = 20
    percentage_to_ignore = 10

    zip_f_name = f'VideoClassificationDataset_{percentage_train_test}_{percentage_bag_shots}_{percentage_to_ignore}'
    dataset_path = Path('/content/VideoClassificationDataset')

    # DOWNLOAD ZIP FILES
    !wget --no-verbose https://github.com/gtoderici/sports-1m-dataset/archive/refs/heads/master.zip

    # EXTRACT AND DELETE THEM
    !unzip -qq -o '/content/master.zip'
    !rm '/content/master.zip'

    DATA = {'train_partition.txt': {},
            'test_partition.txt': {}}

    LABELS = []

    train_dict = {}
    test_dict = {}

    path = '/content/sports-1m-dataset-master/original'

    for f in os.listdir(path):
      with open(path + '/' + f) as f_txt:
        lines = f_txt.readlines()
        for line in lines:
          splitted_line = line.split(' ')
          label_indices = splitted_line[1].rstrip('\n').split(',')
          DATA[f][splitted_line[0]] = list(map(int, label_indices))

    with open('/content/sports-1m-dataset-master/labels.txt') as f_labels:
      LABELS = f_labels.read().splitlines()


    TRAIN = DATA['train_partition.txt']
    TEST = DATA['test_partition.txt']
    print('Original Train Test length: ', len(TRAIN), len(TEST))

    # sample a subset of percentage_train_test percent
    TRAIN = dict(random.sample(TRAIN.items(), (len(TRAIN)*percentage_train_test)//100))
    TEST = dict(random.sample(TEST.items(), (len(TEST)*percentage_train_test)//100))

    print(f'Sampled {percentage_train_test} Percentage  Train Test length: ', len(TRAIN), len(TEST))


    if not os.path.exists(dataset_path): os.makedirs(dataset_path)
    if not os.path.exists(f'{dataset_path}/train'): os.makedirs(f'{dataset_path}/train')
    if not os.path.exists(f'{dataset_path}/test'): os.makedirs(f'{dataset_path}/test')

    Function to extract a sequence of continuous frames:

    def extract_frames(directory, url, idx_bag, start_frame, end_frame):
      capture = cv2.VideoCapture(url)
      count = start_frame

      capture.set(cv2.CAP_PROP_POS_FRAMES, count)
      os.makedirs(f'{directory}/bag_of_shots{str(idx_bag)}')

      while count < end_frame:

        ret, frame = capture.read()

        if not ret:
          shutil.rmtree(f'{directory}/bag_of_shots{str(idx_bag)}')
          return False

        filename = f'{directory}/bag_of_shots{str(idx_bag)}/shot{str(count - start_frame)}.png'

        cv2.imwrite(filename, frame)
        count += 1

      capture.release()
      return True

    Function to spread the load across multiple processes:

    def video_to_frames(video_url, labels_list, directory, dic, percentage_of_bags):
      url_id = video_url.split('=')[1]
      path_until_url_id = f'{dataset_path}/{directory}/{url_id}'
      try:

        ydl_opts = {
            'ignoreerrors': True,
            'quiet': True,
            'nowarnings': True,
            'simulate': True,
            'ignorenoformatserror': True,
            'verbose': False,
            'cookies': '/content/all_cookies.txt',
            # https://stackoverflow.com/questions/63329412/how-can-i-solve-this-youtube-dl-429
        }
        ydl = yt_dlp.YoutubeDL(ydl_opts)
        info_dict = ydl.extract_info(video_url, download=False)

        if(info_dict is not None and info_dict['fps'] >= 20):
          # I must have at least 20 frames per second since I take half-second bags of shots for every video

          formats = info_dict.get('formats', None)

          # excluding the initial and final 10% of each video to avoid noise
          video_length = info_dict['duration'] * info_dict['fps']

          shots = info_dict['fps'] // 2

          to_ignore = (video_length * percentage_to_ignore) // 100
          new_len = video_length - (to_ignore * 2)
          tot_stored_bags = ((new_len // shots) * percentage_of_bags) // 100
          if tot_stored_bags == 0: tot_stored_bags = 1 # minimum 1 bag of shots

          skip_rate_between_bags = (new_len - (tot_stored_bags * shots)) // (tot_stored_bags-1) if tot_stored_bags > 1 else 0

          chunks = [[to_ignore+(bag*(skip_rate_between_bags+shots)), to_ignore+(bag*(skip_rate_between_bags+shots))+shots] for bag in range(tot_stored_bags)]
          # sequence of [[start_frame, end_frame], [start_frame, end_frame], ...]


          # ----------- For the moment I download only shots from videos that have 144p resolution -----------

          res = {
              '160': '144p',
              '133': '240p',
              '134': '360p',
              '135': '360p',
              '136': '720p'
          }

          format_id = {}
          for f in formats: format_id[f['format_id']] = f
          #for res in resolution_id:
          if list(res.keys())[0] in list(format_id.keys()):
              video = format_id[list(res.keys())[0]]
              url = video.get('url', None)
              if(video.get('url', None) != video.get('manifest_url', None)):

                if not os.path.exists(path_until_url_id): os.makedirs(path_until_url_id)

                with ProcessPoolExecutor(max_workers=multiprocessing.cpu_count()) as executor:
                  for idx_bag, f in enumerate(chunks):
                    res = executor.submit(
                      extract_frames, directory = path_until_url_id, url = url, idx_bag = idx_bag, start_frame = f[0], end_frame = f[1])

                    if res.result() is True:
                      l = np.zeros(len(LABELS), dtype=int)
                      for label in labels_list: l[label] = 1
                      l = np.append(l, [shots]) # appending the number of shots taken before adding it to the dictionary

                      dic[f'{directory}/{url_id}/bag_of_shots{str(idx_bag)}'] = l.tolist()


      except Exception as e:
        shutil.rmtree(path_until_url_id)
        pass
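    One likely cause of the low CPU load is visible in the loop above: res.result() is called immediately after each submit, so the main process blocks on each bag before submitting the next one, which serializes the pool. A sketch (with a hypothetical stand-in for extract_frames) of submitting everything first and collecting results as they complete:

    ```python
    from concurrent.futures import ProcessPoolExecutor, as_completed

    def work(idx_bag, start_frame, end_frame):
        # stand-in for extract_frames; returns whether the bag succeeded
        return start_frame < end_frame

    def run_bags(chunks, executor_cls=ProcessPoolExecutor):
        # Submit every bag first, then harvest results as they finish;
        # calling fut.result() inside the submit loop would block and
        # leave all but one worker idle.
        results = {}
        with executor_cls() as executor:
            futures = {
                executor.submit(work, i, c[0], c[1]): i
                for i, c in enumerate(chunks)
            }
            for fut in as_completed(futures):
                results[futures[fut]] = fut.result()
        return results
    ```

    With this shape the per-bag bookkeeping (building the label vector, updating the dictionary) moves after the submit loop, keyed by the bag index stored alongside each future.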

    Download of TRAIN bags of shots:

    start_time = time.time()
    pbar = tqdm(enumerate(TRAIN.items()), total = len(TRAIN.items()), leave=False)

    for _, (url, labels_list) in pbar: video_to_frames(
      video_url = url, labels_list = labels_list, directory = 'train', dic = train_dict, percentage_of_bags = percentage_bag_shots)

    print("--- %s seconds ---" % (time.time() - start_time))

    Download of TEST bags of shots:

    start_time = time.time()
    pbar = tqdm(enumerate(TEST.items()), total = len(TEST.items()), leave=False)

    for _, (url, labels_list) in pbar: video_to_frames(
      video_url = url, labels_list = labels_list, directory = 'test', dic = test_dict, percentage_of_bags = percentage_bag_shots)

    print("--- %s seconds ---" % (time.time() - start_time))

    Save the .csv files:

    train_df = pd.DataFrame.from_dict(train_dict, orient='index', dtype=int).reset_index(level=0)
    train_df = train_df.rename(columns={train_df.columns[-1]: 'shots'})
    train_df.to_csv('/content/VideoClassificationDataset/train.csv', index=True)

    test_df = pd.DataFrame.from_dict(test_dict, orient='index', dtype=int).reset_index(level=0)
    test_df = test_df.rename(columns={test_df.columns[-1]: 'shots'})
    test_df.to_csv('/content/VideoClassificationDataset/test.csv', index=True)
