Stephen 52 Yahoo Com Gmail Com Mail Com 2020 21 Txt
The script below extracts the deep features from the sample string. The function header and setup lines are reconstructed from the calls that follow; steps 2, 4, 5, and 8 of the original numbering are not shown in this excerpt.

```python
import math

def extract_deep_features(text):
    features = {}
    tokens = text.split()

    # 1. Basic stats
    features['token_count'] = len(tokens)
    features['char_count'] = len(text)
    features['digit_count'] = sum(c.isdigit() for c in text)
    features['alpha_count'] = sum(c.isalpha() for c in text)

    # 3. Numbers
    numbers = [int(t) for t in tokens if t.isdigit()]
    features['numbers_found'] = numbers
    features['num_count'] = len(numbers)
    if numbers:
        features['num_sum'] = sum(numbers)
        features['num_avg'] = sum(numbers) / len(numbers)

    # 6. Year detection (1900-2030)
    years = [n for n in numbers if 1900 <= n <= 2030]
    features['years_found'] = years

    # 7. File extension hint
    if 'txt' in tokens:
        features['file_extension'] = 'txt'
        features['looks_like_filename'] = True
    else:
        features['looks_like_filename'] = False

    # 9. Embedded feature: "year + number" combo
    if len(years) == 1 and len(numbers) > 1:
        other_nums = [n for n in numbers if n not in years]
        if other_nums:
            features['year_num_pair'] = (years[0], other_nums[0])

    # 10. Text entropy (as a measure of unpredictability)
    freq = {}
    for ch in text:
        freq[ch] = freq.get(ch, 0) + 1
    entropy = -sum((count / len(text)) * math.log2(count / len(text))
                   for count in freq.values())
    features['entropy'] = round(entropy, 3)

    return features

features = extract_deep_features("stephen 52 yahoo com gmail com mail com 2020 21 txt")
```

Step 3 – Output the deep features

```python
for k, v in features.items():
    print(f"{k}: {v}")
```

Output example (for the steps shown above):

```
token_count: 11
char_count: 51
digit_count: 8
alpha_count: 33
numbers_found: [52, 2020, 21]
num_count: 3
num_sum: 2093
num_avg: 697.6666666666666
years_found: [2020]
file_extension: txt
looks_like_filename: True
year_num_pair: (2020, 52)
entropy: 3.933
```
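Step 10 treats the Shannon entropy of the character distribution as an unpredictability score: each character with probability p contributes -p * log2(p) bits. Here is a minimal standalone sketch of the same formula (the helper name `char_entropy` is mine, not from the original script):

```python
import math

def char_entropy(text):
    # Count occurrences of each character.
    freq = {}
    for ch in text:
        freq[ch] = freq.get(ch, 0) + 1
    # Shannon entropy in bits: -sum(p * log2(p)) over character probabilities.
    return -sum((c / len(text)) * math.log2(c / len(text)) for c in freq.values())

low = char_entropy("aaaaaaaa")    # perfectly predictable: 0 bits
high = char_entropy("abcdefgh")   # uniform over 8 symbols: log2(8) = 3 bits
```

The sample string scores about 3.9 bits, reflecting its fairly even mix of letters, digits, and spaces.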
The sample input is the string "stephen 52 yahoo com gmail com mail com 2020 21 txt". A deep feature in machine learning or data processing typically means extracting meaningful, higher-level attributes from raw input, going beyond simple keyword extraction into inferred patterns, relationships, or embeddings.
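The contrast between keyword extraction and inferred relationships can be made concrete using token order: in the sample string, a provider name immediately followed by the token 'com' suggests a flattened email domain. The sketch below is illustrative only; the provider list and the helper name `email_domain_hints` are assumptions, not part of the original script.

```python
PROVIDERS = {'yahoo', 'gmail', 'mail'}

def email_domain_hints(text):
    tokens = text.split()
    # Keyword extraction alone just reports which provider names occur.
    keywords = [t for t in tokens if t in PROVIDERS]
    # A relational ("deeper") feature also uses token order: a provider
    # immediately followed by 'com' suggests a flattened email domain.
    domains = [f"{a}.com" for a, b in zip(tokens, tokens[1:])
               if a in PROVIDERS and b == 'com']
    return {'provider_keywords': keywords, 'domain_hints': domains}

hints = email_domain_hints("stephen 52 yahoo com gmail com mail com 2020 21 txt")
# hints['domain_hints'] is ['yahoo.com', 'gmail.com', 'mail.com']
```

The adjacency rule is deliberately naive; it recovers structure that plain keyword matching cannot, which is the point the intro makes about deep features.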